2025-05-28 16:25:03.843537 | Job console starting
2025-05-28 16:25:03.864495 | Updating git repos
2025-05-28 16:25:03.932320 | Cloning repos into workspace
2025-05-28 16:25:04.165954 | Restoring repo states
2025-05-28 16:25:04.220377 | Merging changes
2025-05-28 16:25:04.220406 | Checking out repos
2025-05-28 16:25:04.644623 | Preparing playbooks
2025-05-28 16:25:05.348782 | Running Ansible setup
2025-05-28 16:25:09.780082 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-05-28 16:25:10.551450 |
2025-05-28 16:25:10.551662 | PLAY [Base pre]
2025-05-28 16:25:10.570316 |
2025-05-28 16:25:10.570510 | TASK [Setup log path fact]
2025-05-28 16:25:10.601812 | orchestrator | ok
2025-05-28 16:25:10.621368 |
2025-05-28 16:25:10.621567 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-28 16:25:10.664700 | orchestrator | ok
2025-05-28 16:25:10.678735 |
2025-05-28 16:25:10.678927 | TASK [emit-job-header : Print job information]
2025-05-28 16:25:10.737940 | # Job Information
2025-05-28 16:25:10.738361 | Ansible Version: 2.16.14
2025-05-28 16:25:10.738434 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-05-28 16:25:10.738504 | Pipeline: post
2025-05-28 16:25:10.738551 | Executor: 521e9411259a
2025-05-28 16:25:10.738594 | Triggered by: https://github.com/osism/testbed/commit/d4af9e994431d1172193ab26d392d9877c6199ef
2025-05-28 16:25:10.738640 | Event ID: 4aaf16c8-3be0-11f0-93b3-7b98521ce2ca
2025-05-28 16:25:10.751679 |
2025-05-28 16:25:10.751902 | LOOP [emit-job-header : Print node information]
2025-05-28 16:25:10.888432 | orchestrator | ok:
2025-05-28 16:25:10.888887 | orchestrator | # Node Information
2025-05-28 16:25:10.889038 | orchestrator | Inventory Hostname: orchestrator
2025-05-28 16:25:10.889113 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-05-28 16:25:10.889172 | orchestrator | Username: zuul-testbed04
2025-05-28 16:25:10.889227 | orchestrator | Distro: Debian 12.11
2025-05-28 16:25:10.889289 | orchestrator | Provider: static-testbed
2025-05-28 16:25:10.889346 | orchestrator | Region:
2025-05-28 16:25:10.889400 | orchestrator | Label: testbed-orchestrator
2025-05-28 16:25:10.889451 | orchestrator | Product Name: OpenStack Nova
2025-05-28 16:25:10.889501 | orchestrator | Interface IP: 81.163.193.140
2025-05-28 16:25:10.904933 |
2025-05-28 16:25:10.905134 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-05-28 16:25:11.458356 | orchestrator -> localhost | changed
2025-05-28 16:25:11.467846 |
2025-05-28 16:25:11.467998 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-05-28 16:25:12.569351 | orchestrator -> localhost | changed
2025-05-28 16:25:12.591612 |
2025-05-28 16:25:12.591756 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-05-28 16:25:12.914961 | orchestrator -> localhost | ok
2025-05-28 16:25:12.928206 |
2025-05-28 16:25:12.928407 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-05-28 16:25:12.975303 | orchestrator | ok
2025-05-28 16:25:12.995157 | orchestrator | included: /var/lib/zuul/builds/86f999a7dd444367bef7a55bf5f49ef2/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-05-28 16:25:13.003670 |
2025-05-28 16:25:13.003799 | TASK [add-build-sshkey : Create Temp SSH key]
2025-05-28 16:25:14.264171 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-05-28 16:25:14.264515 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/86f999a7dd444367bef7a55bf5f49ef2/work/86f999a7dd444367bef7a55bf5f49ef2_id_rsa
2025-05-28 16:25:14.264562 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/86f999a7dd444367bef7a55bf5f49ef2/work/86f999a7dd444367bef7a55bf5f49ef2_id_rsa.pub
2025-05-28 16:25:14.264591 | orchestrator -> localhost | The key fingerprint is:
2025-05-28 16:25:14.264616 | orchestrator -> localhost | SHA256:hwn+YfAEyaUSjOI9/MCj4xNmuJJHqdI0fTt1tw1KyAw zuul-build-sshkey
2025-05-28 16:25:14.264639 | orchestrator -> localhost | The key's randomart image is:
2025-05-28 16:25:14.264675 | orchestrator -> localhost | +---[RSA 3072]----+
2025-05-28 16:25:14.264699 | orchestrator -> localhost | | o...o. |
2025-05-28 16:25:14.264721 | orchestrator -> localhost | |. . ..oo |
2025-05-28 16:25:14.264743 | orchestrator -> localhost | |..+ . + . |
2025-05-28 16:25:14.264764 | orchestrator -> localhost | | . B oE= o |
2025-05-28 16:25:14.264784 | orchestrator -> localhost | |. .o= .+S.. |
2025-05-28 16:25:14.264809 | orchestrator -> localhost | |.*= ...o=oo o |
2025-05-28 16:25:14.264830 | orchestrator -> localhost | |+Bo. . o.o o + |
2025-05-28 16:25:14.264851 | orchestrator -> localhost | |*oo o . . . |
2025-05-28 16:25:14.264872 | orchestrator -> localhost | |o.. . |
2025-05-28 16:25:14.264892 | orchestrator -> localhost | +----[SHA256]-----+
2025-05-28 16:25:14.264957 | orchestrator -> localhost | ok: Runtime: 0:00:00.724125
2025-05-28 16:25:14.273511 |
2025-05-28 16:25:14.273638 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-05-28 16:25:14.309055 | orchestrator | ok
2025-05-28 16:25:14.330190 | orchestrator | included: /var/lib/zuul/builds/86f999a7dd444367bef7a55bf5f49ef2/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-05-28 16:25:14.342474 |
2025-05-28 16:25:14.342610 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-05-28 16:25:14.377597 | orchestrator | skipping: Conditional result was False
2025-05-28 16:25:14.386760 |
2025-05-28 16:25:14.386918 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-05-28 16:25:15.067023 | orchestrator | changed
2025-05-28 16:25:15.076218 |
2025-05-28 16:25:15.076365 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-05-28 16:25:15.387549 | orchestrator | ok
2025-05-28 16:25:15.397692 |
2025-05-28 16:25:15.397831 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-05-28 16:25:15.838216 | orchestrator | ok
2025-05-28 16:25:15.846293 |
2025-05-28 16:25:15.846424 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-05-28 16:25:16.239161 | orchestrator | ok
2025-05-28 16:25:16.248305 |
2025-05-28 16:25:16.248451 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-05-28 16:25:16.283599 | orchestrator | skipping: Conditional result was False
2025-05-28 16:25:16.294342 |
2025-05-28 16:25:16.294479 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-05-28 16:25:16.785790 | orchestrator -> localhost | changed
2025-05-28 16:25:16.802505 |
2025-05-28 16:25:16.802646 | TASK [add-build-sshkey : Add back temp key]
2025-05-28 16:25:17.172255 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/86f999a7dd444367bef7a55bf5f49ef2/work/86f999a7dd444367bef7a55bf5f49ef2_id_rsa (zuul-build-sshkey)
2025-05-28 16:25:17.172856 | orchestrator -> localhost | ok: Runtime: 0:00:00.012479
2025-05-28 16:25:17.189638 |
2025-05-28 16:25:17.189812 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-05-28 16:25:17.625026 | orchestrator | ok
2025-05-28 16:25:17.632858 |
2025-05-28 16:25:17.633002 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-05-28 16:25:17.667465 | orchestrator | skipping: Conditional result was False
2025-05-28 16:25:17.731678 |
2025-05-28 16:25:17.731832 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-05-28 16:25:18.164133 | orchestrator | ok
2025-05-28 16:25:18.185161 |
2025-05-28 16:25:18.185344 | TASK [validate-host : Define zuul_info_dir fact]
2025-05-28 16:25:18.244879 | orchestrator | ok
2025-05-28 16:25:18.256363 |
2025-05-28 16:25:18.256504 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-05-28 16:25:18.565370 | orchestrator -> localhost | ok
2025-05-28 16:25:18.574356 |
2025-05-28 16:25:18.574469 | TASK [validate-host : Collect information about the host]
2025-05-28 16:25:19.819819 | orchestrator | ok
2025-05-28 16:25:19.837115 |
2025-05-28 16:25:19.837236 | TASK [validate-host : Sanitize hostname]
2025-05-28 16:25:19.915217 | orchestrator | ok
2025-05-28 16:25:19.924037 |
2025-05-28 16:25:19.924192 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-05-28 16:25:20.529007 | orchestrator -> localhost | changed
2025-05-28 16:25:20.545800 |
2025-05-28 16:25:20.546026 | TASK [validate-host : Collect information about zuul worker]
2025-05-28 16:25:20.990082 | orchestrator | ok
2025-05-28 16:25:21.001388 |
2025-05-28 16:25:21.001581 | TASK [validate-host : Write out all zuul information for each host]
2025-05-28 16:25:21.614911 | orchestrator -> localhost | changed
2025-05-28 16:25:21.633705 |
2025-05-28 16:25:21.633872 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-05-28 16:25:21.922133 | orchestrator | ok
2025-05-28 16:25:21.928943 |
2025-05-28 16:25:21.929091 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-05-28 16:26:02.975427 | orchestrator | changed:
2025-05-28 16:26:02.976552 | orchestrator | .d..t...... src/
2025-05-28 16:26:02.976608 | orchestrator | .d..t...... src/github.com/
2025-05-28 16:26:02.976636 | orchestrator | .d..t...... src/github.com/osism/
2025-05-28 16:26:02.976659 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-05-28 16:26:02.976682 | orchestrator | RedHat.yml
2025-05-28 16:26:03.005913 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-05-28 16:26:03.005946 | orchestrator | RedHat.yml
2025-05-28 16:26:03.006001 | orchestrator | = 1.53.0"...
2025-05-28 16:26:18.822126 | orchestrator | 16:26:18.821 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-05-28 16:26:18.893893 | orchestrator | 16:26:18.893 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-05-28 16:26:20.225823 | orchestrator | 16:26:20.225 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.1.0...
2025-05-28 16:26:21.538320 | orchestrator | 16:26:21.538 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.1.0 (signed, key ID 4F80527A391BEFD2)
2025-05-28 16:26:22.464984 | orchestrator | 16:26:22.464 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-05-28 16:26:23.360067 | orchestrator | 16:26:23.359 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-05-28 16:26:24.289662 | orchestrator | 16:26:24.289 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-05-28 16:26:25.348163 | orchestrator | 16:26:25.347 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-05-28 16:26:25.348286 | orchestrator | 16:26:25.348 STDOUT terraform: Providers are signed by their developers.
2025-05-28 16:26:25.348417 | orchestrator | 16:26:25.348 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-05-28 16:26:25.348474 | orchestrator | 16:26:25.348 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-05-28 16:26:25.348658 | orchestrator | 16:26:25.348 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-05-28 16:26:25.348845 | orchestrator | 16:26:25.348 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-05-28 16:26:25.348937 | orchestrator | 16:26:25.348 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-05-28 16:26:25.349085 | orchestrator | 16:26:25.348 STDOUT terraform: you run "tofu init" in the future.
2025-05-28 16:26:25.349177 | orchestrator | 16:26:25.349 STDOUT terraform: OpenTofu has been successfully initialized!
2025-05-28 16:26:25.349324 | orchestrator | 16:26:25.349 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-05-28 16:26:25.349459 | orchestrator | 16:26:25.349 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-05-28 16:26:25.349503 | orchestrator | 16:26:25.349 STDOUT terraform: should now work.
2025-05-28 16:26:25.349673 | orchestrator | 16:26:25.349 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-05-28 16:26:25.349811 | orchestrator | 16:26:25.349 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-05-28 16:26:25.349939 | orchestrator | 16:26:25.349 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-05-28 16:26:25.547982 | orchestrator | 16:26:25.547 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
2025-05-28 16:26:25.797496 | orchestrator | 16:26:25.797 STDOUT terraform: Created and switched to workspace "ci"!
2025-05-28 16:26:25.797606 | orchestrator | 16:26:25.797 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-05-28 16:26:25.797615 | orchestrator | 16:26:25.797 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-05-28 16:26:25.797620 | orchestrator | 16:26:25.797 STDOUT terraform: for this configuration.
2025-05-28 16:26:26.027837 | orchestrator | 16:26:26.025 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
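The init output above records which provider plugins OpenTofu resolved and pinned into .terraform.lock.hcl. A minimal sketch of a required_providers block consistent with those "Finding"/"Installing" lines follows; it is a reconstruction, not the testbed repository's actual file, and the ">= 1.53.0" constraint is assumed to belong to the openstack provider because that "Finding" line is truncated in this log.

```hcl
# Sketch only: provider requirements consistent with the init log above.
terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0" # assumed constraint; resolved to v3.1.0 above
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0" # resolved to v2.5.3 above
    }
    null = {
      source = "hashicorp/null" # unconstrained; latest (v3.2.4) was selected
    }
  }
}
```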
2025-05-28 16:26:26.134408 | orchestrator | 16:26:26.134 STDOUT terraform: ci.auto.tfvars
2025-05-28 16:26:26.146233 | orchestrator | 16:26:26.146 STDOUT terraform: default_custom.tf
2025-05-28 16:26:26.344066 | orchestrator | 16:26:26.343 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
2025-05-28 16:26:27.237570 | orchestrator | 16:26:27.233 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-05-28 16:26:27.786251 | orchestrator | 16:26:27.785 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-05-28 16:26:28.024031 | orchestrator | 16:26:28.023 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-05-28 16:26:28.024129 | orchestrator | 16:26:28.023 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-05-28 16:26:28.024136 | orchestrator | 16:26:28.023 STDOUT terraform:  + create
2025-05-28 16:26:28.024143 | orchestrator | 16:26:28.024 STDOUT terraform:  <= read (data resources)
2025-05-28 16:26:28.024149 | orchestrator | 16:26:28.024 STDOUT terraform: OpenTofu will perform the following actions:
2025-05-28 16:26:28.024155 | orchestrator | 16:26:28.024 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply
2025-05-28 16:26:28.024161 | orchestrator | 16:26:28.024 STDOUT terraform:  # (config refers to values not yet known)
2025-05-28 16:26:28.024168 | orchestrator | 16:26:28.024 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-05-28 16:26:28.024205 | orchestrator | 16:26:28.024 STDOUT terraform:  + checksum = (known after apply)
2025-05-28 16:26:28.024235 | orchestrator | 16:26:28.024 STDOUT terraform:  + created_at = (known after apply)
2025-05-28 16:26:28.024267 | orchestrator | 16:26:28.024 STDOUT terraform:  + file = (known after apply)
2025-05-28 16:26:28.024298 | orchestrator | 16:26:28.024 STDOUT terraform:  + id = (known after apply)
2025-05-28 16:26:28.024333 | orchestrator | 16:26:28.024 STDOUT terraform:  + metadata = (known after apply)
2025-05-28 16:26:28.024363 | orchestrator | 16:26:28.024 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-05-28 16:26:28.024397 | orchestrator | 16:26:28.024 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-05-28 16:26:28.024420 | orchestrator | 16:26:28.024 STDOUT terraform:  + most_recent = true
2025-05-28 16:26:28.024445 | orchestrator | 16:26:28.024 STDOUT terraform:  + name = (known after apply)
2025-05-28 16:26:28.024478 | orchestrator | 16:26:28.024 STDOUT terraform:  + protected = (known after apply)
2025-05-28 16:26:28.024511 | orchestrator | 16:26:28.024 STDOUT terraform:  + region = (known after apply)
2025-05-28 16:26:28.024539 | orchestrator | 16:26:28.024 STDOUT terraform:  + schema = (known after apply)
2025-05-28 16:26:28.024582 | orchestrator | 16:26:28.024 STDOUT terraform:  + size_bytes = (known after apply)
2025-05-28 16:26:28.024616 | orchestrator | 16:26:28.024 STDOUT terraform:  + tags = (known after apply)
2025-05-28 16:26:28.024648 | orchestrator | 16:26:28.024 STDOUT terraform:  + updated_at = (known after apply)
2025-05-28 16:26:28.024656 | orchestrator | 16:26:28.024 STDOUT terraform:  }
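The data block just planned is deferred to apply time because its arguments depend on values not yet known. A minimal sketch of a matching data source, assuming the image name comes from a variable (the variable name is hypothetical):

```hcl
# Sketch only: image lookup consistent with the deferred data source above.
data "openstack_images_image_v2" "image" {
  name        = var.image # hypothetical variable; "(known after apply)" in the plan
  most_recent = true      # matches the planned attribute
}
```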
2025-05-28 16:26:28.024709 | orchestrator | 16:26:28.024 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply
2025-05-28 16:26:28.024742 | orchestrator | 16:26:28.024 STDOUT terraform:  # (config refers to values not yet known)
2025-05-28 16:26:28.024783 | orchestrator | 16:26:28.024 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-05-28 16:26:28.024813 | orchestrator | 16:26:28.024 STDOUT terraform:  + checksum = (known after apply)
2025-05-28 16:26:28.024844 | orchestrator | 16:26:28.024 STDOUT terraform:  + created_at = (known after apply)
2025-05-28 16:26:28.024877 | orchestrator | 16:26:28.024 STDOUT terraform:  + file = (known after apply)
2025-05-28 16:26:28.024909 | orchestrator | 16:26:28.024 STDOUT terraform:  + id = (known after apply)
2025-05-28 16:26:28.024946 | orchestrator | 16:26:28.024 STDOUT terraform:  + metadata = (known after apply)
2025-05-28 16:26:28.024980 | orchestrator | 16:26:28.024 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-05-28 16:26:28.025012 | orchestrator | 16:26:28.024 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-05-28 16:26:28.025034 | orchestrator | 16:26:28.025 STDOUT terraform:  + most_recent = true
2025-05-28 16:26:28.025065 | orchestrator | 16:26:28.025 STDOUT terraform:  + name = (known after apply)
2025-05-28 16:26:28.025097 | orchestrator | 16:26:28.025 STDOUT terraform:  + protected = (known after apply)
2025-05-28 16:26:28.025132 | orchestrator | 16:26:28.025 STDOUT terraform:  + region = (known after apply)
2025-05-28 16:26:28.025161 | orchestrator | 16:26:28.025 STDOUT terraform:  + schema = (known after apply)
2025-05-28 16:26:28.025197 | orchestrator | 16:26:28.025 STDOUT terraform:  + size_bytes = (known after apply)
2025-05-28 16:26:28.025222 | orchestrator | 16:26:28.025 STDOUT terraform:  + tags = (known after apply)
2025-05-28 16:26:28.025253 | orchestrator | 16:26:28.025 STDOUT terraform:  + updated_at = (known after apply)
2025-05-28 16:26:28.025260 | orchestrator | 16:26:28.025 STDOUT terraform:  }
2025-05-28 16:26:28.025313 | orchestrator | 16:26:28.025 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created
2025-05-28 16:26:28.025345 | orchestrator | 16:26:28.025 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" {
2025-05-28 16:26:28.025383 | orchestrator | 16:26:28.025 STDOUT terraform:  + content = (known after apply)
2025-05-28 16:26:28.025422 | orchestrator | 16:26:28.025 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-05-28 16:26:28.025461 | orchestrator | 16:26:28.025 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-05-28 16:26:28.025497 | orchestrator | 16:26:28.025 STDOUT terraform:  + content_md5 = (known after apply)
2025-05-28 16:26:28.025536 | orchestrator | 16:26:28.025 STDOUT terraform:  + content_sha1 = (known after apply)
2025-05-28 16:26:28.025589 | orchestrator | 16:26:28.025 STDOUT terraform:  + content_sha256 = (known after apply)
2025-05-28 16:26:28.025625 | orchestrator | 16:26:28.025 STDOUT terraform:  + content_sha512 = (known after apply)
2025-05-28 16:26:28.025655 | orchestrator | 16:26:28.025 STDOUT terraform:  + directory_permission = "0777"
2025-05-28 16:26:28.025680 | orchestrator | 16:26:28.025 STDOUT terraform:  + file_permission = "0644"
2025-05-28 16:26:28.025720 | orchestrator | 16:26:28.025 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci"
2025-05-28 16:26:28.025757 | orchestrator | 16:26:28.025 STDOUT terraform:  + id = (known after apply)
2025-05-28 16:26:28.025765 | orchestrator | 16:26:28.025 STDOUT terraform:  }
2025-05-28 16:26:28.025802 | orchestrator | 16:26:28.025 STDOUT terraform:  # local_file.id_rsa_pub will be created
2025-05-28 16:26:28.025828 | orchestrator | 16:26:28.025 STDOUT terraform:  + resource "local_file" "id_rsa_pub" {
2025-05-28 16:26:28.025869 | orchestrator | 16:26:28.025 STDOUT terraform:  + content = (known after apply)
2025-05-28 16:26:28.025907 | orchestrator | 16:26:28.025 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-05-28 16:26:28.025943 | orchestrator | 16:26:28.025 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-05-28 16:26:28.025981 | orchestrator | 16:26:28.025 STDOUT terraform:  + content_md5 = (known after apply)
2025-05-28 16:26:28.026044 | orchestrator | 16:26:28.025 STDOUT terraform:  + content_sha1 = (known after apply)
2025-05-28 16:26:28.026074 | orchestrator | 16:26:28.026 STDOUT terraform:  + content_sha256 = (known after apply)
2025-05-28 16:26:28.026114 | orchestrator | 16:26:28.026 STDOUT terraform:  + content_sha512 = (known after apply)
2025-05-28 16:26:28.026144 | orchestrator | 16:26:28.026 STDOUT terraform:  + directory_permission = "0777"
2025-05-28 16:26:28.026176 | orchestrator | 16:26:28.026 STDOUT terraform:  + file_permission = "0644"
2025-05-28 16:26:28.026211 | orchestrator | 16:26:28.026 STDOUT terraform:  + filename = ".id_rsa.ci.pub"
2025-05-28 16:26:28.026252 | orchestrator | 16:26:28.026 STDOUT terraform:  + id = (known after apply)
2025-05-28 16:26:28.026259 | orchestrator | 16:26:28.026 STDOUT terraform:  }
2025-05-28 16:26:28.026292 | orchestrator | 16:26:28.026 STDOUT terraform:  # local_file.inventory will be created
2025-05-28 16:26:28.026319 | orchestrator | 16:26:28.026 STDOUT terraform:  + resource "local_file" "inventory" {
2025-05-28 16:26:28.026357 | orchestrator | 16:26:28.026 STDOUT terraform:  + content = (known after apply)
2025-05-28 16:26:28.026393 | orchestrator | 16:26:28.026 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-05-28 16:26:28.026429 | orchestrator | 16:26:28.026 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-05-28 16:26:28.026469 | orchestrator | 16:26:28.026 STDOUT terraform:  + content_md5 = (known after apply)
2025-05-28 16:26:28.026512 | orchestrator | 16:26:28.026 STDOUT terraform:  + content_sha1 = (known after apply)
2025-05-28 16:26:28.026566 | orchestrator | 16:26:28.026 STDOUT terraform:  + content_sha256 = (known after apply)
2025-05-28 16:26:28.026615 | orchestrator | 16:26:28.026 STDOUT terraform:  + content_sha512 = (known after apply)
2025-05-28 16:26:28.026643 | orchestrator | 16:26:28.026 STDOUT terraform:  + directory_permission = "0777"
2025-05-28 16:26:28.026673 | orchestrator | 16:26:28.026 STDOUT terraform:  + file_permission = "0644"
2025-05-28 16:26:28.026708 | orchestrator | 16:26:28.026 STDOUT terraform:  + filename = "inventory.ci"
2025-05-28 16:26:28.026748 | orchestrator | 16:26:28.026 STDOUT terraform:  + id = (known after apply)
2025-05-28 16:26:28.026755 | orchestrator | 16:26:28.026 STDOUT terraform:  }
2025-05-28 16:26:28.026793 | orchestrator | 16:26:28.026 STDOUT terraform:  # local_sensitive_file.id_rsa will be created
2025-05-28 16:26:28.026828 | orchestrator | 16:26:28.026 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" {
2025-05-28 16:26:28.026863 | orchestrator | 16:26:28.026 STDOUT terraform:  + content = (sensitive value)
2025-05-28 16:26:28.026903 | orchestrator | 16:26:28.026 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-05-28 16:26:28.026942 | orchestrator | 16:26:28.026 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-05-28 16:26:28.026979 | orchestrator | 16:26:28.026 STDOUT terraform:  + content_md5 = (known after apply)
2025-05-28 16:26:28.027020 | orchestrator | 16:26:28.026 STDOUT terraform:  + content_sha1 = (known after apply)
2025-05-28 16:26:28.027056 | orchestrator | 16:26:28.027 STDOUT terraform:  + content_sha256 = (known after apply)
2025-05-28 16:26:28.027093 | orchestrator | 16:26:28.027 STDOUT terraform:  + content_sha512 = (known after apply)
2025-05-28 16:26:28.027121 | orchestrator | 16:26:28.027 STDOUT terraform:  + directory_permission = "0700"
2025-05-28 16:26:28.027148 | orchestrator | 16:26:28.027 STDOUT terraform:  + file_permission = "0600"
2025-05-28 16:26:28.027181 | orchestrator | 16:26:28.027 STDOUT terraform:  + filename = ".id_rsa.ci"
2025-05-28 16:26:28.027220 | orchestrator | 16:26:28.027 STDOUT terraform:  + id = (known after apply)
2025-05-28 16:26:28.027228 | orchestrator | 16:26:28.027 STDOUT terraform:  }
2025-05-28 16:26:28.027266 | orchestrator | 16:26:28.027 STDOUT terraform:  # null_resource.node_semaphore will be created
2025-05-28 16:26:28.027299 | orchestrator | 16:26:28.027 STDOUT terraform:  + resource "null_resource" "node_semaphore" {
2025-05-28 16:26:28.027322 | orchestrator | 16:26:28.027 STDOUT terraform:  + id = (known after apply)
2025-05-28 16:26:28.027330 | orchestrator | 16:26:28.027 STDOUT terraform:  }
2025-05-28 16:26:28.027385 | orchestrator | 16:26:28.027 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-05-28 16:26:28.027436 | orchestrator | 16:26:28.027 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-05-28 16:26:28.027474 | orchestrator | 16:26:28.027 STDOUT terraform:  + attachment = (known after apply)
2025-05-28 16:26:28.027500 | orchestrator | 16:26:28.027 STDOUT terraform:  + availability_zone = "nova"
2025-05-28 16:26:28.027540 | orchestrator | 16:26:28.027 STDOUT terraform:  + id = (known after apply)
2025-05-28 16:26:28.027588 | orchestrator | 16:26:28.027 STDOUT terraform:  + image_id = (known after apply)
2025-05-28 16:26:28.027628 | orchestrator | 16:26:28.027 STDOUT terraform:  + metadata = (known after apply)
2025-05-28 16:26:28.027675 | orchestrator | 16:26:28.027 STDOUT terraform:  + name = "testbed-volume-manager-base"
2025-05-28 16:26:28.027724 | orchestrator | 16:26:28.027 STDOUT terraform:  + region = (known after apply)
2025-05-28 16:26:28.027731 | orchestrator | 16:26:28.027 STDOUT terraform:  + size = 80
2025-05-28 16:26:28.027771 | orchestrator | 16:26:28.027 STDOUT terraform:  + volume_retype_policy = "never"
2025-05-28 16:26:28.027797 | orchestrator | 16:26:28.027 STDOUT terraform:  + volume_type = "ssd"
2025-05-28 16:26:28.027804 | orchestrator | 16:26:28.027 STDOUT terraform:  }
2025-05-28 16:26:28.027859 | orchestrator | 16:26:28.027 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-05-28 16:26:28.027910 | orchestrator | 16:26:28.027 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-28 16:26:28.027954 | orchestrator | 16:26:28.027 STDOUT terraform:  + attachment = (known after apply)
2025-05-28 16:26:28.027982 | orchestrator | 16:26:28.027 STDOUT terraform:  + availability_zone = "nova"
2025-05-28 16:26:28.028020 | orchestrator | 16:26:28.027 STDOUT terraform:  + id = (known after apply)
2025-05-28 16:26:28.028057 | orchestrator | 16:26:28.028 STDOUT terraform:  + image_id = (known after apply)
2025-05-28 16:26:28.028097 | orchestrator | 16:26:28.028 STDOUT terraform:  + metadata = (known after apply)
2025-05-28 16:26:28.028145 | orchestrator | 16:26:28.028 STDOUT terraform:  + name = "testbed-volume-0-node-base"
2025-05-28 16:26:28.028183 | orchestrator | 16:26:28.028 STDOUT terraform:  + region = (known after apply)
2025-05-28 16:26:28.028206 | orchestrator | 16:26:28.028 STDOUT terraform:  + size = 80
2025-05-28 16:26:28.028236 | orchestrator | 16:26:28.028 STDOUT terraform:  + volume_retype_policy = "never"
2025-05-28 16:26:28.028262 | orchestrator | 16:26:28.028 STDOUT terraform:  + volume_type = "ssd"
2025-05-28 16:26:28.028269 | orchestrator | 16:26:28.028 STDOUT terraform:  }
2025-05-28 16:26:28.028321 | orchestrator | 16:26:28.028 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-05-28 16:26:28.028370 | orchestrator | 16:26:28.028 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-28 16:26:28.028412 | orchestrator | 16:26:28.028 STDOUT terraform:  + attachment = (known after apply)
2025-05-28 16:26:28.028434 | orchestrator | 16:26:28.028 STDOUT terraform:  + availability_zone = "nova"
2025-05-28 16:26:28.028472 | orchestrator | 16:26:28.028 STDOUT terraform:  + id = (known after apply)
2025-05-28 16:26:28.028509 | orchestrator | 16:26:28.028 STDOUT terraform:  + image_id = (known after apply)
2025-05-28 16:26:28.028558 | orchestrator | 16:26:28.028 STDOUT terraform:  + metadata = (known after apply)
2025-05-28 16:26:28.028616 | orchestrator | 16:26:28.028 STDOUT terraform:  + name = "testbed-volume-1-node-base"
2025-05-28 16:26:28.028653 | orchestrator | 16:26:28.028 STDOUT terraform:  + region = (known after apply)
2025-05-28 16:26:28.028679 | orchestrator | 16:26:28.028 STDOUT terraform:  + size = 80
2025-05-28 16:26:28.028706 | orchestrator | 16:26:28.028 STDOUT terraform:  + volume_retype_policy = "never"
2025-05-28 16:26:28.028732 | orchestrator | 16:26:28.028 STDOUT terraform:  + volume_type = "ssd"
2025-05-28 16:26:28.028740 | orchestrator | 16:26:28.028 STDOUT terraform:  }
2025-05-28 16:26:28.028828 | orchestrator | 16:26:28.028 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-05-28 16:26:28.028874 | orchestrator | 16:26:28.028 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-28 16:26:28.028912 | orchestrator | 16:26:28.028 STDOUT terraform:  + attachment = (known after apply)
2025-05-28 16:26:28.028939 | orchestrator | 16:26:28.028 STDOUT terraform:  + availability_zone = "nova"
2025-05-28 16:26:28.028978 | orchestrator | 16:26:28.028 STDOUT terraform:  + id = (known after apply)
2025-05-28 16:26:28.029015 | orchestrator | 16:26:28.028 STDOUT terraform:  + image_id = (known after apply)
2025-05-28 16:26:28.029051 | orchestrator | 16:26:28.029 STDOUT terraform:  + metadata = (known after apply)
2025-05-28 16:26:28.029099 | orchestrator | 16:26:28.029 STDOUT terraform:  + name = "testbed-volume-2-node-base"
2025-05-28 16:26:28.029138 | orchestrator | 16:26:28.029 STDOUT terraform:  + region = (known after apply)
2025-05-28 16:26:28.029162 | orchestrator | 16:26:28.029 STDOUT terraform:  + size = 80
2025-05-28 16:26:28.029187 | orchestrator | 16:26:28.029 STDOUT terraform:  + volume_retype_policy = "never"
2025-05-28 16:26:28.029213 | orchestrator | 16:26:28.029 STDOUT terraform:  + volume_type = "ssd"
2025-05-28 16:26:28.029221 | orchestrator | 16:26:28.029 STDOUT terraform:  }
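The plan repeats this block for node_base_volume[0] through [5], varying only the index in the name. A count-based resource like the following sketch would produce exactly that; all values are taken from the plan output above, while the count-based naming pattern and the image_id reference are assumptions, not the testbed repository's actual code.

```hcl
# Sketch only: count-based volume resource consistent with the plan above.
resource "openstack_blockstorage_volume_v3" "node_base_volume" {
  count             = 6 # indices 0..5 appear in the plan
  name              = "testbed-volume-${count.index}-node-base"
  availability_zone = "nova"
  size              = 80
  volume_type       = "ssd"
  image_id          = data.openstack_images_image_v2.image_node.id # deferred, hence "(known after apply)"
}
```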
2025-05-28 16:26:28.029273 | orchestrator | 16:26:28.029 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-05-28 16:26:28.029318 | orchestrator | 16:26:28.029 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-28 16:26:28.029359 | orchestrator | 16:26:28.029 STDOUT terraform:  + attachment = (known after apply)
2025-05-28 16:26:28.029385 | orchestrator | 16:26:28.029 STDOUT terraform:  + availability_zone = "nova"
2025-05-28 16:26:28.029422 | orchestrator | 16:26:28.029 STDOUT terraform:  + id = (known after apply)
2025-05-28 16:26:28.029459 | orchestrator | 16:26:28.029 STDOUT terraform:  + image_id = (known after apply)
2025-05-28 16:26:28.029495 | orchestrator | 16:26:28.029 STDOUT terraform:  + metadata = (known after apply)
2025-05-28 16:26:28.029541 | orchestrator | 16:26:28.029 STDOUT terraform:  + name = "testbed-volume-3-node-base"
2025-05-28 16:26:28.029607 | orchestrator | 16:26:28.029 STDOUT terraform:  + region = (known after apply)
2025-05-28 16:26:28.029622 | orchestrator | 16:26:28.029 STDOUT terraform:  + size = 80
2025-05-28 16:26:28.029654 | orchestrator | 16:26:28.029 STDOUT terraform:  + volume_retype_policy = "never"
2025-05-28 16:26:28.029680 | orchestrator | 16:26:28.029 STDOUT terraform:  + volume_type = "ssd"
2025-05-28 16:26:28.029688 | orchestrator | 16:26:28.029 STDOUT terraform:  }
2025-05-28 16:26:28.029738 | orchestrator | 16:26:28.029 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-05-28 16:26:28.029785 | orchestrator | 16:26:28.029 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-28 16:26:28.029824 | orchestrator | 16:26:28.029 STDOUT terraform:  + attachment = (known after apply)
2025-05-28 16:26:28.029849 | orchestrator | 16:26:28.029 STDOUT terraform:  + availability_zone = "nova"
2025-05-28 16:26:28.029888 | orchestrator | 16:26:28.029 STDOUT terraform:  + id = (known after apply)
2025-05-28 16:26:28.029927 | orchestrator | 16:26:28.029 STDOUT terraform:  + image_id = (known after apply)
2025-05-28 16:26:28.029966 | orchestrator | 16:26:28.029 STDOUT terraform:  + metadata = (known after apply)
2025-05-28 16:26:28.030038 | orchestrator | 16:26:28.029 STDOUT terraform:  + name = "testbed-volume-4-node-base"
2025-05-28 16:26:28.030067 | orchestrator | 16:26:28.030 STDOUT terraform:  + region = (known after apply)
2025-05-28 16:26:28.030092 | orchestrator | 16:26:28.030 STDOUT terraform:  + size = 80
2025-05-28 16:26:28.030121 | orchestrator | 16:26:28.030 STDOUT terraform:  + volume_retype_policy = "never"
2025-05-28 16:26:28.030147 | orchestrator | 16:26:28.030 STDOUT terraform:  + volume_type = "ssd"
2025-05-28 16:26:28.030154 | orchestrator | 16:26:28.030 STDOUT terraform:  }
2025-05-28 16:26:28.030207 | orchestrator | 16:26:28.030 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-05-28 16:26:28.030254 | orchestrator | 16:26:28.030 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-28 16:26:28.030292 | orchestrator | 16:26:28.030 STDOUT terraform:  + attachment = (known after apply)
2025-05-28 16:26:28.030318 | orchestrator | 16:26:28.030 STDOUT terraform:  + availability_zone = "nova"
2025-05-28 16:26:28.030355 | orchestrator | 16:26:28.030 STDOUT terraform:  + id = (known after apply)
2025-05-28 16:26:28.030392 | orchestrator | 16:26:28.030 STDOUT terraform:  + image_id = (known after apply)
2025-05-28 16:26:28.030430 | orchestrator | 16:26:28.030 STDOUT terraform:  + metadata = (known after apply)
2025-05-28 16:26:28.030476 | orchestrator | 16:26:28.030 STDOUT terraform:  + name = "testbed-volume-5-node-base"
2025-05-28 16:26:28.030514 | orchestrator | 16:26:28.030 STDOUT terraform:  + region = (known after apply)
2025-05-28 16:26:28.030536 | orchestrator | 16:26:28.030 STDOUT terraform:  + size = 80
2025-05-28 16:26:28.030599 | orchestrator | 16:26:28.030 STDOUT terraform:  + volume_retype_policy = "never"
2025-05-28 16:26:28.030622 | orchestrator | 16:26:28.030 STDOUT terraform:  + volume_type = "ssd"
2025-05-28 16:26:28.030630 | orchestrator | 16:26:28.030 STDOUT terraform:  }
2025-05-28 16:26:28.030679 | orchestrator | 16:26:28.030 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-05-28 16:26:28.030727 | orchestrator | 16:26:28.030 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-28 16:26:28.030769 | orchestrator | 16:26:28.030 STDOUT terraform:  + attachment = (known after apply)
2025-05-28 16:26:28.030795 | orchestrator | 16:26:28.030 STDOUT terraform:  + availability_zone = "nova"
2025-05-28 16:26:28.030834 | orchestrator | 16:26:28.030 STDOUT terraform:  + id = (known after apply)
2025-05-28 16:26:28.030872 | orchestrator | 16:26:28.030 STDOUT terraform:  + metadata = (known after apply)
2025-05-28 16:26:28.030912 | orchestrator | 16:26:28.030 STDOUT terraform:  + name = "testbed-volume-0-node-3"
2025-05-28 16:26:28.030951 | orchestrator | 16:26:28.030 STDOUT terraform:  + region = (known after apply)
2025-05-28 16:26:28.030974 | orchestrator | 16:26:28.030 STDOUT terraform:  + size = 20
2025-05-28 16:26:28.031001 | orchestrator | 16:26:28.030 STDOUT terraform:  + volume_retype_policy = "never"
2025-05-28 16:26:28.031030 | orchestrator | 16:26:28.030 STDOUT terraform:  + volume_type = "ssd"
2025-05-28 16:26:28.031037 | orchestrator | 16:26:28.031 STDOUT terraform:  }
2025-05-28 16:26:28.031087 | orchestrator | 16:26:28.031 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created
2025-05-28 16:26:28.031137 | orchestrator | 16:26:28.031 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-28 16:26:28.031174 | orchestrator | 16:26:28.031 STDOUT terraform:  + attachment = (known after apply)
2025-05-28 16:26:28.031198 | orchestrator | 16:26:28.031 STDOUT terraform:  + availability_zone = "nova"
2025-05-28 16:26:28.031236 | orchestrator | 16:26:28.031 STDOUT terraform:  + id = (known after apply)
2025-05-28 16:26:28.031273 | orchestrator | 16:26:28.031 STDOUT terraform:  + metadata = (known after apply)
2025-05-28 16:26:28.031314 | orchestrator | 16:26:28.031 STDOUT terraform:  + name = "testbed-volume-1-node-4"
2025-05-28 16:26:28.031354 | orchestrator | 16:26:28.031 STDOUT terraform:  + region = (known after apply)
2025-05-28 16:26:28.031377 | orchestrator | 16:26:28.031 STDOUT terraform:  + size = 20
2025-05-28 16:26:28.031403 | orchestrator | 16:26:28.031 STDOUT terraform:  + volume_retype_policy = "never"
2025-05-28 16:26:28.031429 | orchestrator | 16:26:28.031 STDOUT terraform:  + volume_type = "ssd"
2025-05-28 16:26:28.031436 | orchestrator | 16:26:28.031 STDOUT terraform:  }
2025-05-28 16:26:28.031484 | orchestrator | 16:26:28.031 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created
2025-05-28 16:26:28.031529 | orchestrator | 16:26:28.031 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-28 16:26:28.031576 | orchestrator | 16:26:28.031 STDOUT terraform:  + attachment = (known after apply)
2025-05-28 16:26:28.031605 | orchestrator | 16:26:28.031 STDOUT terraform:  + availability_zone = "nova"
2025-05-28 16:26:28.031643 | orchestrator | 16:26:28.031 STDOUT terraform:  + id = (known after apply)
2025-05-28 16:26:28.031681 | orchestrator | 16:26:28.031 STDOUT terraform:  + metadata = (known after apply)
2025-05-28 16:26:28.031722 | orchestrator | 16:26:28.031 STDOUT terraform:  + name = "testbed-volume-2-node-5"
2025-05-28 16:26:28.031758 | orchestrator | 16:26:28.031 STDOUT terraform:  + region = (known after apply)
2025-05-28 16:26:28.031780 | orchestrator | 16:26:28.031 STDOUT terraform:  + size = 20
2025-05-28 16:26:28.031808 | orchestrator | 16:26:28.031 STDOUT terraform:  + volume_retype_policy = "never"
2025-05-28 16:26:28.031835 | orchestrator | 16:26:28.031 STDOUT terraform:  + volume_type = "ssd"
2025-05-28 16:26:28.031843 | orchestrator | 16:26:28.031 STDOUT terraform:  }
2025-05-28 16:26:28.031892 | orchestrator | 16:26:28.031 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created
2025-05-28 16:26:28.031939 | orchestrator | 16:26:28.031 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-28 16:26:28.031974 | orchestrator | 16:26:28.031 STDOUT terraform:  + attachment = (known after apply)
2025-05-28 16:26:28.032004 | orchestrator | 16:26:28.031 STDOUT terraform:  + availability_zone = "nova"
2025-05-28 16:26:28.032044 | orchestrator | 16:26:28.031 STDOUT terraform:  + id = (known after apply)
2025-05-28 16:26:28.032084 | orchestrator | 16:26:28.032 STDOUT terraform:  + metadata = (known after apply)
2025-05-28 16:26:28.032126 | orchestrator | 16:26:28.032 STDOUT terraform:  + name = "testbed-volume-3-node-3"
2025-05-28 16:26:28.032162 | orchestrator | 16:26:28.032 STDOUT terraform:  + region = (known after apply)
2025-05-28 16:26:28.032186 | orchestrator | 16:26:28.032 STDOUT terraform:  + size = 20
2025-05-28 16:26:28.032216 | orchestrator | 16:26:28.032 STDOUT terraform:  + volume_retype_policy = "never"
2025-05-28 16:26:28.032241 | orchestrator | 16:26:28.032 STDOUT terraform:  + volume_type = "ssd"
2025-05-28 16:26:28.032249 | orchestrator | 16:26:28.032 STDOUT terraform:  }
2025-05-28 16:26:28.032298 | orchestrator | 16:26:28.032 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created
2025-05-28 16:26:28.032341 | orchestrator | 16:26:28.032 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-28 16:26:28.032379 | orchestrator | 16:26:28.032 STDOUT terraform:  + attachment = (known after apply)
2025-05-28 16:26:28.032403 | orchestrator | 16:26:28.032 STDOUT terraform:  + availability_zone = "nova"
2025-05-28 16:26:28.032442 | orchestrator | 16:26:28.032 STDOUT terraform:  + id = (known after apply)
2025-05-28 16:26:28.032479 | orchestrator | 16:26:28.032 STDOUT terraform:  + metadata = (known after apply)
2025-05-28 16:26:28.032519 | orchestrator | 16:26:28.032 STDOUT terraform:  + name = "testbed-volume-4-node-4"
2025-05-28 16:26:28.032586 | orchestrator | 16:26:28.032 STDOUT terraform:  + region = (known after apply)
2025-05-28 16:26:28.032594 | orchestrator | 16:26:28.032 STDOUT terraform:  + size = 20
2025-05-28 16:26:28.032624 | orchestrator | 16:26:28.032 STDOUT terraform:  + volume_retype_policy = "never"
2025-05-28 16:26:28.032651 | orchestrator | 16:26:28.032 STDOUT terraform:  + volume_type = "ssd"
2025-05-28 16:26:28.032663 | orchestrator | 16:26:28.032 STDOUT terraform:  }
2025-05-28 16:26:28.032711 | orchestrator | 16:26:28.032 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created
2025-05-28 16:26:28.032753 | orchestrator | 16:26:28.032 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-28 16:26:28.032792 | orchestrator | 16:26:28.032 STDOUT terraform:  + attachment = (known after apply)
2025-05-28 16:26:28.032817 | orchestrator | 16:26:28.032 STDOUT terraform:  + availability_zone = "nova"
2025-05-28 16:26:28.032855 | orchestrator | 16:26:28.032 STDOUT terraform:  + id = (known after apply)
2025-05-28 16:26:28.032892 | orchestrator | 16:26:28.032 STDOUT terraform:  + metadata = (known after apply)
2025-05-28 16:26:28.032934 | orchestrator | 16:26:28.032 STDOUT terraform:  + name = "testbed-volume-5-node-5"
2025-05-28 16:26:28.032991 | orchestrator | 16:26:28.032 STDOUT terraform:  + region = (known after apply)
2025-05-28 16:26:28.033028 | orchestrator | 16:26:28.032 STDOUT terraform:  + size = 20
2025-05-28 16:26:28.033062 | orchestrator | 16:26:28.033 STDOUT terraform:  + volume_retype_policy = "never"
2025-05-28 16:26:28.033091 | orchestrator | 16:26:28.033 STDOUT terraform:  + volume_type = "ssd"
2025-05-28 16:26:28.033098 | orchestrator | 16:26:28.033 STDOUT terraform:  }
2025-05-28 16:26:28.033150 | orchestrator | 16:26:28.033 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created
2025-05-28 16:26:28.033194 | orchestrator | 16:26:28.033 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-28 16:26:28.033232 | orchestrator | 16:26:28.033 STDOUT terraform:  + attachment = (known after apply)
2025-05-28 16:26:28.033257 | orchestrator | 16:26:28.033 STDOUT terraform:  + availability_zone = "nova"
2025-05-28 16:26:28.033299 | orchestrator | 16:26:28.033 STDOUT terraform:  + id = (known after apply)
2025-05-28 16:26:28.033334 | orchestrator | 16:26:28.033 STDOUT terraform:  + metadata = (known after apply)
2025-05-28 16:26:28.033376 | orchestrator | 16:26:28.033 STDOUT terraform:  + name = "testbed-volume-6-node-3"
2025-05-28 16:26:28.033416 | orchestrator | 16:26:28.033 STDOUT terraform:  + region = (known after apply)
2025-05-28 16:26:28.033439 | orchestrator | 16:26:28.033 STDOUT terraform:  + size = 20
2025-05-28 16:26:28.033467 | orchestrator | 16:26:28.033 STDOUT terraform:  + volume_retype_policy = "never"
2025-05-28 16:26:28.033490 | orchestrator | 16:26:28.033 STDOUT terraform:  + volume_type = "ssd"
2025-05-28 16:26:28.033498 | orchestrator | 16:26:28.033 STDOUT terraform:  }
2025-05-28 16:26:28.033562 | orchestrator | 16:26:28.033 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created
2025-05-28 16:26:28.033599 | orchestrator | 16:26:28.033 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-28 16:26:28.033636 | orchestrator | 16:26:28.033 STDOUT terraform:  + attachment = (known after apply)
2025-05-28 16:26:28.033663 | orchestrator | 16:26:28.033 STDOUT terraform:  + availability_zone = "nova"
2025-05-28 16:26:28.033702 | orchestrator | 16:26:28.033 STDOUT terraform:  + id = (known after apply)
2025-05-28 16:26:28.033739 | orchestrator | 16:26:28.033 STDOUT terraform:  + metadata = (known after apply)
2025-05-28 16:26:28.033780 | orchestrator | 16:26:28.033 STDOUT terraform:  + name = "testbed-volume-7-node-4"
2025-05-28 16:26:28.033817 | orchestrator | 16:26:28.033 STDOUT terraform:  + region = (known after apply)
2025-05-28 16:26:28.033842 | orchestrator | 16:26:28.033 STDOUT terraform:  + size = 20
2025-05-28 16:26:28.033868 | orchestrator | 16:26:28.033 STDOUT terraform:  + volume_retype_policy = "never"
2025-05-28 16:26:28.033893 | orchestrator | 16:26:28.033 STDOUT terraform:  + volume_type = "ssd"
2025-05-28 16:26:28.033900 | orchestrator | 16:26:28.033 STDOUT terraform:  }
2025-05-28 16:26:28.033955 | orchestrator | 16:26:28.033 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created
2025-05-28 16:26:28.034002 | orchestrator | 16:26:28.033 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-28 16:26:28.034058 | orchestrator | 16:26:28.033 STDOUT terraform:  + attachment = (known after apply)
2025-05-28 16:26:28.034080 | orchestrator | 16:26:28.034 STDOUT terraform:  + availability_zone = "nova"
2025-05-28 16:26:28.034118 | orchestrator | 16:26:28.034 STDOUT terraform:  + id = (known after apply)
2025-05-28 16:26:28.034154 | orchestrator | 16:26:28.034 STDOUT terraform:  + metadata = (known after apply)
2025-05-28 16:26:28.034196 | orchestrator | 16:26:28.034 STDOUT terraform:  + name = "testbed-volume-8-node-5"
2025-05-28 16:26:28.034233 | orchestrator | 16:26:28.034 STDOUT terraform:  + region = (known after apply)
2025-05-28 16:26:28.034256 | orchestrator | 16:26:28.034 STDOUT terraform:  + size = 20
2025-05-28 16:26:28.034281 | orchestrator | 16:26:28.034 STDOUT terraform:  + volume_retype_policy = "never"
2025-05-28 16:26:28.034307 | orchestrator | 16:26:28.034 STDOUT terraform:  + volume_type = "ssd"
2025-05-28 16:26:28.034314 | orchestrator | 16:26:28.034 STDOUT terraform:  }
2025-05-28 16:26:28.034366 | orchestrator | 16:26:28.034 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created
2025-05-28 16:26:28.034413 | orchestrator | 16:26:28.034 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" {
2025-05-28 16:26:28.034449 | orchestrator | 16:26:28.034 STDOUT terraform:  + access_ip_v4 = (known after apply)
2025-05-28 16:26:28.034486 | orchestrator | 16:26:28.034 STDOUT terraform:  + access_ip_v6 = (known after apply)
2025-05-28 16:26:28.034522 | orchestrator | 16:26:28.034 STDOUT terraform:  + all_metadata = (known after apply)
2025-05-28 16:26:28.034629 | orchestrator | 16:26:28.034 STDOUT terraform:  + all_tags = (known after apply)
2025-05-28 16:26:28.034663 | orchestrator | 16:26:28.034 STDOUT terraform:  + availability_zone = "nova"
2025-05-28 16:26:28.034688 | orchestrator | 16:26:28.034 STDOUT terraform:  + config_drive = true
2025-05-28 16:26:28.034727 | orchestrator | 16:26:28.034 STDOUT terraform:  + created = (known after apply)
2025-05-28 16:26:28.034764 | orchestrator | 16:26:28.034 STDOUT terraform:  + flavor_id = (known after apply)
2025-05-28 16:26:28.034796 | orchestrator | 16:26:28.034 STDOUT terraform:  + flavor_name = "OSISM-4V-16"
2025-05-28 16:26:28.034821 | orchestrator | 16:26:28.034 STDOUT terraform:  + force_delete = false
2025-05-28 16:26:28.034858 | orchestrator | 16:26:28.034 STDOUT terraform:  + hypervisor_hostname = (known after apply)
2025-05-28 16:26:28.034896 | orchestrator | 16:26:28.034 STDOUT terraform:  + id = (known after apply)
2025-05-28 16:26:28.034947 | orchestrator | 16:26:28.034 STDOUT terraform:  + image_id = (known after apply)
2025-05-28 16:26:28.034975 | orchestrator | 16:26:28.034 STDOUT terraform:  + image_name = (known after apply)
2025-05-28 16:26:28.035001 | orchestrator | 16:26:28.034 STDOUT terraform:  + key_pair = "testbed"
2025-05-28 16:26:28.035033 | orchestrator | 16:26:28.034 STDOUT terraform:  + name = "testbed-manager"
2025-05-28 16:26:28.035059 | orchestrator | 16:26:28.035 STDOUT terraform:  + power_state = "active"
2025-05-28 16:26:28.035096 | orchestrator | 16:26:28.035 STDOUT terraform:  + region = (known after apply)
2025-05-28 16:26:28.035132 | orchestrator | 16:26:28.035 STDOUT terraform:  + security_groups = (known after apply)
2025-05-28 16:26:28.035158 | orchestrator | 16:26:28.035 STDOUT terraform:  + stop_before_destroy = false
2025-05-28 16:26:28.035194 | orchestrator | 16:26:28.035 STDOUT terraform:  + updated = (known after apply)
2025-05-28 16:26:28.035231 | orchestrator | 16:26:28.035 STDOUT terraform:  + user_data = (known after apply)
2025-05-28 16:26:28.035250 | orchestrator | 16:26:28.035 STDOUT terraform:  + block_device {
2025-05-28 16:26:28.035276 | orchestrator | 16:26:28.035 STDOUT terraform:  + boot_index = 0
2025-05-28 16:26:28.035305 | orchestrator | 16:26:28.035 STDOUT terraform:  + delete_on_termination = false
2025-05-28 16:26:28.035337 | orchestrator | 16:26:28.035 STDOUT terraform:  + destination_type = "volume"
2025-05-28 16:26:28.035366 | orchestrator | 16:26:28.035 STDOUT terraform:  + multiattach = false
2025-05-28 16:26:28.035399 | orchestrator | 16:26:28.035 STDOUT terraform:  + source_type = "volume"
2025-05-28 16:26:28.035441 | orchestrator | 16:26:28.035 STDOUT terraform:  + uuid = (known after apply)
2025-05-28 16:26:28.035448 | orchestrator | 16:26:28.035 STDOUT terraform:  }
2025-05-28 16:26:28.035468 | orchestrator | 16:26:28.035 STDOUT terraform:  + network {
2025-05-28 16:26:28.035491 | orchestrator | 16:26:28.035 STDOUT terraform:  + access_network = false
2025-05-28 16:26:28.035524 | orchestrator | 16:26:28.035 STDOUT terraform:  + fixed_ip_v4 = (known after apply)
2025-05-28 16:26:28.035571 | orchestrator | 16:26:28.035 STDOUT terraform:  + fixed_ip_v6 = (known after apply)
2025-05-28 16:26:28.035604 | orchestrator | 16:26:28.035 STDOUT terraform:  + mac = (known after apply)
2025-05-28 16:26:28.035636 | orchestrator | 16:26:28.035 STDOUT terraform:  + name = (known after apply)
2025-05-28 16:26:28.035671 | orchestrator | 16:26:28.035 STDOUT terraform:  + port = (known after apply)
2025-05-28 16:26:28.035705 | orchestrator | 16:26:28.035 STDOUT terraform:  + uuid = (known after apply)
2025-05-28 16:26:28.035716 | orchestrator | 16:26:28.035 STDOUT terraform:  }
2025-05-28 16:26:28.035723 | orchestrator | 16:26:28.035 STDOUT terraform:  }
2025-05-28 16:26:28.035770 | orchestrator | 16:26:28.035 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created
2025-05-28 16:26:28.035814 | orchestrator | 16:26:28.035 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" {
2025-05-28 16:26:28.035850 | orchestrator | 16:26:28.035 STDOUT terraform:  + access_ip_v4 = (known after apply)
2025-05-28 16:26:28.035885 | orchestrator | 16:26:28.035 STDOUT terraform:  + access_ip_v6 = (known after apply)
2025-05-28 16:26:28.035923 | orchestrator | 16:26:28.035 STDOUT terraform:  + all_metadata = (known after apply)
2025-05-28 16:26:28.035959 | orchestrator | 16:26:28.035 STDOUT terraform:  + all_tags = (known after apply)
2025-05-28 16:26:28.035985 | orchestrator | 16:26:28.035 STDOUT terraform:  + availability_zone = "nova"
2025-05-28 16:26:28.036007 | orchestrator | 16:26:28.035 STDOUT terraform:  + config_drive = true
2025-05-28 16:26:28.036043 | orchestrator | 16:26:28.036 STDOUT terraform:  + created = (known after apply)
2025-05-28 16:26:28.036081 | orchestrator | 16:26:28.036 STDOUT terraform:  + flavor_id = (known after apply)
2025-05-28 16:26:28.036112 | orchestrator | 16:26:28.036 STDOUT terraform:  + flavor_name = "OSISM-8V-32"
2025-05-28 16:26:28.036138 | orchestrator | 16:26:28.036 STDOUT terraform:  + force_delete = false
2025-05-28 16:26:28.036175 | orchestrator | 16:26:28.036 STDOUT terraform:  + hypervisor_hostname = (known after apply)
2025-05-28 16:26:28.036212 | orchestrator | 16:26:28.036 STDOUT terraform:  + id = (known after apply)
2025-05-28 16:26:28.036248 | orchestrator | 16:26:28.036 STDOUT terraform:  + image_id = (known after apply)
2025-05-28 16:26:28.036284 | orchestrator | 16:26:28.036 STDOUT terraform:  + image_name = (known after apply)
2025-05-28 16:26:28.036311 | orchestrator | 16:26:28.036 STDOUT terraform:  + key_pair = "testbed"
2025-05-28 16:26:28.036343 | orchestrator | 16:26:28.036 STDOUT terraform:  + name = "testbed-node-0"
2025-05-28 16:26:28.036370 | orchestrator | 16:26:28.036 STDOUT terraform:  + power_state = "active"
2025-05-28 16:26:28.036412 | orchestrator | 16:26:28.036 STDOUT terraform:  + region = (known after apply)
2025-05-28 16:26:28.036444 | orchestrator | 16:26:28.036 STDOUT terraform:  + security_groups = (known after apply)
2025-05-28 16:26:28.036468 | orchestrator | 16:26:28.036 STDOUT terraform:  + stop_before_destroy = false
2025-05-28 16:26:28.036506 | orchestrator | 16:26:28.036 STDOUT terraform:  + updated = (known after apply)
2025-05-28 16:26:28.036610 | orchestrator | 16:26:28.036 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-05-28 16:26:28.036619 | orchestrator | 16:26:28.036 STDOUT terraform:  + block_device {
2025-05-28 16:26:28.036650 | orchestrator | 16:26:28.036 STDOUT terraform:  + boot_index = 0
2025-05-28 16:26:28.036680 | orchestrator | 16:26:28.036 STDOUT terraform:  + delete_on_termination = false
2025-05-28 16:26:28.036712 | orchestrator | 16:26:28.036 STDOUT terraform:  + destination_type = "volume"
2025-05-28 16:26:28.036743 | orchestrator | 16:26:28.036 STDOUT terraform:  + multiattach = false
2025-05-28 16:26:28.036777 | orchestrator | 16:26:28.036 STDOUT terraform:  + source_type = "volume"
2025-05-28 16:26:28.036817 | orchestrator | 16:26:28.036 STDOUT terraform:  + uuid = (known after apply)
2025-05-28 16:26:28.036824 | orchestrator | 16:26:28.036 STDOUT terraform:  }
2025-05-28 16:26:28.036845 | orchestrator | 16:26:28.036 STDOUT terraform:  + network {
2025-05-28 16:26:28.036868 | orchestrator | 16:26:28.036 STDOUT terraform:  + access_network = false
2025-05-28 16:26:28.036903 | orchestrator | 16:26:28.036 STDOUT terraform:  + fixed_ip_v4 = (known after apply)
2025-05-28 16:26:28.036935 | orchestrator | 16:26:28.036 STDOUT terraform:  + fixed_ip_v6 = (known after apply)
2025-05-28 16:26:28.036969 | orchestrator | 16:26:28.036 STDOUT terraform:  + mac = (known after apply)
2025-05-28 16:26:28.037002 | orchestrator | 16:26:28.036 STDOUT terraform:  + name = (known after apply)
2025-05-28 16:26:28.037035 | orchestrator | 16:26:28.036 STDOUT terraform:  + port = (known after apply)
2025-05-28 16:26:28.037069 | orchestrator | 16:26:28.037 STDOUT terraform:  + uuid = (known after apply)
2025-05-28 16:26:28.037077 | orchestrator | 16:26:28.037 STDOUT terraform:  }
2025-05-28 16:26:28.037098 | orchestrator | 16:26:28.037 STDOUT terraform:  }
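The node_server entries boot from the pre-built base volumes rather than directly from an image (source_type and destination_type are both "volume", boot_index 0). A sketch of a resource shape consistent with the node_server[0] entry above follows; the count, the volume reference, and the network variable are assumptions for illustration, and the user_data hash shown in the plan would come from a cloud-init template in the real configuration.

```hcl
# Sketch only: boot-from-volume instance consistent with the plan above.
resource "openstack_compute_instance_v2" "node_server" {
  count             = 6 # six node base volumes are planned; this excerpt ends at node_server[3]
  name              = "testbed-node-${count.index}"
  availability_zone = "nova"
  flavor_name       = "OSISM-8V-32"
  key_pair          = "testbed"
  config_drive      = true
  power_state       = "active"
  user_data         = var.node_user_data # hypothetical; "ae09..." hash in the plan

  block_device {
    uuid                  = openstack_blockstorage_volume_v3.node_base_volume[count.index].id
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = false
  }

  network {
    uuid = var.network_id # hypothetical; the plan resolves the port/uuid only at apply time
  }
}
```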
2025-05-28 16:26:28.037144 | orchestrator | 16:26:28.037 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created
2025-05-28 16:26:28.037185 | orchestrator | 16:26:28.037 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" {
2025-05-28 16:26:28.037221 | orchestrator | 16:26:28.037 STDOUT terraform:  + access_ip_v4 = (known after apply)
2025-05-28 16:26:28.037260 | orchestrator | 16:26:28.037 STDOUT terraform:  + access_ip_v6 = (known after apply)
2025-05-28 16:26:28.037296 | orchestrator | 16:26:28.037 STDOUT terraform:  + all_metadata = (known after apply)
2025-05-28 16:26:28.037333 | orchestrator | 16:26:28.037 STDOUT terraform:  + all_tags = (known after apply)
2025-05-28 16:26:28.037361 | orchestrator | 16:26:28.037 STDOUT terraform:  + availability_zone = "nova"
2025-05-28 16:26:28.037383 | orchestrator | 16:26:28.037 STDOUT terraform:  + config_drive = true
2025-05-28 16:26:28.037419 | orchestrator | 16:26:28.037 STDOUT terraform:  + created = (known after apply)
2025-05-28 16:26:28.037456 | orchestrator | 16:26:28.037 STDOUT terraform:  + flavor_id = (known after apply)
2025-05-28 16:26:28.037487 | orchestrator | 16:26:28.037 STDOUT terraform:  + flavor_name = "OSISM-8V-32"
2025-05-28 16:26:28.037513 | orchestrator | 16:26:28.037 STDOUT terraform:  + force_delete = false
2025-05-28 16:26:28.037575 | orchestrator | 16:26:28.037 STDOUT terraform:  + hypervisor_hostname = (known after apply)
2025-05-28 16:26:28.037585 | orchestrator | 16:26:28.037 STDOUT terraform:  + id = (known after apply)
2025-05-28 16:26:28.037629 | orchestrator | 16:26:28.037 STDOUT terraform:  + image_id = (known after apply)
2025-05-28 16:26:28.037665 | orchestrator | 16:26:28.037 STDOUT terraform:  + image_name = (known after apply)
2025-05-28 16:26:28.037693 | orchestrator | 16:26:28.037 STDOUT terraform:  + key_pair = "testbed"
2025-05-28 16:26:28.037725 | orchestrator | 16:26:28.037 STDOUT terraform:  + name = "testbed-node-1"
2025-05-28 16:26:28.037752 | orchestrator | 16:26:28.037 STDOUT terraform:  + power_state = "active"
2025-05-28 16:26:28.037788 | orchestrator | 16:26:28.037 STDOUT terraform:  + region = (known after apply)
2025-05-28 16:26:28.037824 | orchestrator | 16:26:28.037 STDOUT terraform:  + security_groups = (known after apply)
2025-05-28 16:26:28.037850 | orchestrator | 16:26:28.037 STDOUT terraform:  + stop_before_destroy = false
2025-05-28 16:26:28.037887 | orchestrator | 16:26:28.037 STDOUT terraform:  + updated = (known after apply)
2025-05-28 16:26:28.037938 | orchestrator | 16:26:28.037 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-05-28 16:26:28.037957 | orchestrator | 16:26:28.037 STDOUT terraform:  + block_device {
2025-05-28 16:26:28.037983 | orchestrator | 16:26:28.037 STDOUT terraform:  + boot_index = 0
2025-05-28 16:26:28.038031 | orchestrator | 16:26:28.037 STDOUT terraform:  + delete_on_termination = false
2025-05-28 16:26:28.038069 | orchestrator | 16:26:28.038 STDOUT terraform:  + destination_type = "volume"
2025-05-28 16:26:28.038099 | orchestrator | 16:26:28.038 STDOUT terraform:  + multiattach = false
2025-05-28 16:26:28.038136 | orchestrator | 16:26:28.038 STDOUT terraform:  + source_type = "volume"
2025-05-28 16:26:28.038172 | orchestrator | 16:26:28.038 STDOUT terraform:  + uuid = (known after apply)
2025-05-28 16:26:28.038179 | orchestrator | 16:26:28.038 STDOUT terraform:  }
2025-05-28 16:26:28.038201 | orchestrator | 16:26:28.038 STDOUT terraform:  + network {
2025-05-28 16:26:28.038227 | orchestrator | 16:26:28.038 STDOUT terraform:  + access_network = false
2025-05-28 16:26:28.038261 | orchestrator | 16:26:28.038 STDOUT terraform:  + fixed_ip_v4 = (known after apply)
2025-05-28 16:26:28.038292 | orchestrator | 16:26:28.038 STDOUT terraform:  + fixed_ip_v6 = (known after apply)
2025-05-28 16:26:28.038327 | orchestrator | 16:26:28.038 STDOUT terraform:  + mac = (known after apply)
2025-05-28 16:26:28.038361 | orchestrator | 16:26:28.038 STDOUT terraform:  + name = (known after apply)
2025-05-28 16:26:28.038392 | orchestrator | 16:26:28.038 STDOUT terraform:  + port = (known after apply)
2025-05-28 16:26:28.038426 | orchestrator | 16:26:28.038 STDOUT terraform:  + uuid = (known after apply)
2025-05-28 16:26:28.038433 | orchestrator | 16:26:28.038 STDOUT terraform:  }
2025-05-28 16:26:28.038455 | orchestrator | 16:26:28.038 STDOUT terraform:  }
2025-05-28 16:26:28.038501 | orchestrator | 16:26:28.038 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created
2025-05-28 16:26:28.038556 | orchestrator | 16:26:28.038 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" {
2025-05-28 16:26:28.038590 | orchestrator | 16:26:28.038 STDOUT terraform:  + access_ip_v4 = (known after apply)
2025-05-28 16:26:28.038627 | orchestrator | 16:26:28.038 STDOUT terraform:  + access_ip_v6 = (known after apply)
2025-05-28 16:26:28.038662 | orchestrator | 16:26:28.038 STDOUT terraform:  + all_metadata = (known after apply)
2025-05-28 16:26:28.038699 | orchestrator | 16:26:28.038 STDOUT terraform:  + all_tags = (known after apply)
2025-05-28 16:26:28.038724 | orchestrator | 16:26:28.038 STDOUT terraform:  + availability_zone = "nova"
2025-05-28 16:26:28.038747 | orchestrator | 16:26:28.038 STDOUT terraform:  + config_drive = true
2025-05-28 16:26:28.038784 | orchestrator | 16:26:28.038 STDOUT terraform:  + created = (known after apply)
2025-05-28 16:26:28.038821 | orchestrator | 16:26:28.038 STDOUT terraform:  + flavor_id = (known after apply)
2025-05-28 16:26:28.038853 | orchestrator | 16:26:28.038 STDOUT terraform:  + flavor_name = "OSISM-8V-32"
2025-05-28 16:26:28.038880 | orchestrator | 16:26:28.038 STDOUT terraform:  + force_delete = false
2025-05-28 16:26:28.038914 | orchestrator | 16:26:28.038 STDOUT terraform:  + hypervisor_hostname = (known after apply)
2025-05-28 16:26:28.038952 | orchestrator | 16:26:28.038 STDOUT terraform:  + id = (known after apply)
2025-05-28 16:26:28.038989 | orchestrator | 16:26:28.038 STDOUT terraform:  + image_id = (known after apply)
2025-05-28 16:26:28.039026 | orchestrator | 16:26:28.038 STDOUT terraform:  + image_name = (known after apply)
2025-05-28 16:26:28.039053 | orchestrator | 16:26:28.039 STDOUT terraform:  + key_pair = "testbed"
2025-05-28 16:26:28.039085 | orchestrator | 16:26:28.039 STDOUT terraform:  + name = "testbed-node-2"
2025-05-28 16:26:28.039113 | orchestrator | 16:26:28.039 STDOUT terraform:  + power_state = "active"
2025-05-28 16:26:28.039149 | orchestrator | 16:26:28.039 STDOUT terraform:  + region = (known after apply)
2025-05-28 16:26:28.039186 | orchestrator | 16:26:28.039 STDOUT terraform:  + security_groups = (known after apply)
2025-05-28 16:26:28.039212 | orchestrator | 16:26:28.039 STDOUT terraform:  + stop_before_destroy = false
2025-05-28 16:26:28.039249 | orchestrator | 16:26:28.039 STDOUT terraform:  + updated = (known after apply)
2025-05-28 16:26:28.039299 | orchestrator | 16:26:28.039 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-05-28 16:26:28.039309 | orchestrator | 16:26:28.039 STDOUT terraform:  + block_device {
2025-05-28 16:26:28.039342 | orchestrator | 16:26:28.039 STDOUT terraform:  + boot_index = 0
2025-05-28 16:26:28.039370 | orchestrator | 16:26:28.039 STDOUT terraform:  + delete_on_termination = false
2025-05-28 16:26:28.039400 | orchestrator | 16:26:28.039 STDOUT terraform:  + destination_type = "volume"
2025-05-28 16:26:28.039431 | orchestrator | 16:26:28.039 STDOUT terraform:  + multiattach = false
2025-05-28 16:26:28.039463 | orchestrator | 16:26:28.039 STDOUT terraform:  + source_type = "volume"
2025-05-28 16:26:28.039503 | orchestrator | 16:26:28.039 STDOUT terraform:  + uuid = (known after apply)
2025-05-28 16:26:28.039510 | orchestrator | 16:26:28.039 STDOUT terraform:  }
2025-05-28 16:26:28.039531 | orchestrator | 16:26:28.039 STDOUT terraform:  + network {
2025-05-28 16:26:28.039580 | orchestrator | 16:26:28.039 STDOUT terraform:  + access_network = false
2025-05-28 16:26:28.039612 | orchestrator | 16:26:28.039 STDOUT terraform:  + fixed_ip_v4 = (known after apply)
2025-05-28 16:26:28.039645 | orchestrator | 16:26:28.039 STDOUT terraform:  + fixed_ip_v6 = (known after apply)
2025-05-28 16:26:28.039676 | orchestrator | 16:26:28.039 STDOUT terraform:  + mac = (known after apply)
2025-05-28 16:26:28.039711 | orchestrator | 16:26:28.039 STDOUT terraform:  + name = (known after apply)
2025-05-28 16:26:28.039745 | orchestrator | 16:26:28.039 STDOUT terraform:  + port = (known after apply)
2025-05-28 16:26:28.039777 | orchestrator | 16:26:28.039 STDOUT terraform:  + uuid = (known after apply)
2025-05-28 16:26:28.039785 | orchestrator | 16:26:28.039 STDOUT terraform:  }
2025-05-28 16:26:28.039805 | orchestrator | 16:26:28.039 STDOUT terraform:  }
2025-05-28 16:26:28.039849 | orchestrator | 16:26:28.039 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created
2025-05-28 16:26:28.039894 | orchestrator | 16:26:28.039 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" {
2025-05-28 16:26:28.039928 | orchestrator | 16:26:28.039 STDOUT terraform:  + access_ip_v4 = (known after apply)
2025-05-28 16:26:28.039965 | orchestrator | 16:26:28.039 STDOUT terraform:  + access_ip_v6 = (known after apply)
2025-05-28 16:26:28.040007 | orchestrator | 16:26:28.039 STDOUT terraform:  + all_metadata = (known after apply)
2025-05-28 16:26:28.040040 | orchestrator | 16:26:28.039 STDOUT terraform:  + all_tags = (known after apply)
2025-05-28 16:26:28.040068 | orchestrator | 16:26:28.040 STDOUT terraform:  + availability_zone = "nova"
2025-05-28 16:26:28.040091 | orchestrator | 16:26:28.040 STDOUT terraform:  + config_drive = true
2025-05-28 16:26:28.040127 | orchestrator | 16:26:28.040 STDOUT terraform:  + created = (known after apply)
2025-05-28 16:26:28.040169 | orchestrator | 16:26:28.040 STDOUT terraform:  + flavor_id = (known after apply)
2025-05-28 16:26:28.040194 | orchestrator | 16:26:28.040 STDOUT terraform:  + flavor_name = "OSISM-8V-32"
2025-05-28 16:26:28.040221 | orchestrator | 16:26:28.040 STDOUT terraform:  + force_delete = false
2025-05-28 16:26:28.040254 | orchestrator | 16:26:28.040 STDOUT terraform:  + hypervisor_hostname = (known after apply)
2025-05-28 16:26:28.040295 | orchestrator | 16:26:28.040 STDOUT terraform:  + id = (known after apply)
2025-05-28 16:26:28.040332 | orchestrator | 16:26:28.040 STDOUT terraform:  + image_id = (known after apply)
2025-05-28 16:26:28.040369 | orchestrator | 16:26:28.040 STDOUT terraform:  + image_name = (known after apply)
2025-05-28 16:26:28.040396 |
orchestrator | 16:26:28.040 STDOUT terraform:  + key_pair = "testbed" 2025-05-28 16:26:28.040429 | orchestrator | 16:26:28.040 STDOUT terraform:  + name = "testbed-node-3" 2025-05-28 16:26:28.040456 | orchestrator | 16:26:28.040 STDOUT terraform:  + power_state = "active" 2025-05-28 16:26:28.040493 | orchestrator | 16:26:28.040 STDOUT terraform:  + region = (known after apply) 2025-05-28 16:26:28.040525 | orchestrator | 16:26:28.040 STDOUT terraform:  + security_groups = (known after apply) 2025-05-28 16:26:28.040585 | orchestrator | 16:26:28.040 STDOUT terraform:  + stop_before_destroy = false 2025-05-28 16:26:28.040604 | orchestrator | 16:26:28.040 STDOUT terraform:  + updated = (known after apply) 2025-05-28 16:26:28.040647 | orchestrator | 16:26:28.040 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-28 16:26:28.040655 | orchestrator | 16:26:28.040 STDOUT terraform:  + block_device { 2025-05-28 16:26:28.040686 | orchestrator | 16:26:28.040 STDOUT terraform:  + boot_index = 0 2025-05-28 16:26:28.040714 | orchestrator | 16:26:28.040 STDOUT terraform:  + delete_on_termination = false 2025-05-28 16:26:28.040746 | orchestrator | 16:26:28.040 STDOUT terraform:  + destination_type = "volume" 2025-05-28 16:26:28.040776 | orchestrator | 16:26:28.040 STDOUT terraform:  + multiattach = false 2025-05-28 16:26:28.040806 | orchestrator | 16:26:28.040 STDOUT terraform:  + source_type = "volume" 2025-05-28 16:26:28.040846 | orchestrator | 16:26:28.040 STDOUT terraform:  + uuid = (known after apply) 2025-05-28 16:26:28.040854 | orchestrator | 16:26:28.040 STDOUT terraform:  } 2025-05-28 16:26:28.040873 | orchestrator | 16:26:28.040 STDOUT terraform:  + network { 2025-05-28 16:26:28.040896 | orchestrator | 16:26:28.040 STDOUT terraform:  + access_network = false 2025-05-28 16:26:28.040929 | orchestrator | 16:26:28.040 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-28 16:26:28.040962 | orchestrator | 16:26:28.040 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-28 16:26:28.040997 | orchestrator | 16:26:28.040 STDOUT terraform:  + mac = (known after apply) 2025-05-28 16:26:28.041029 | orchestrator | 16:26:28.040 STDOUT terraform:  + name = (known after apply) 2025-05-28 16:26:28.041109 | orchestrator | 16:26:28.041 STDOUT terraform:  + port = (known after apply) 2025-05-28 16:26:28.041149 | orchestrator | 16:26:28.041 STDOUT terraform:  + uuid = (known after apply) 2025-05-28 16:26:28.041156 | orchestrator | 16:26:28.041 STDOUT terraform:  } 2025-05-28 16:26:28.041179 | orchestrator | 16:26:28.041 STDOUT terraform:  } 2025-05-28 16:26:28.041223 | orchestrator | 16:26:28.041 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-05-28 16:26:28.041266 | orchestrator | 16:26:28.041 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-28 16:26:28.041302 | orchestrator | 16:26:28.041 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-28 16:26:28.041340 | orchestrator | 16:26:28.041 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-28 16:26:28.041376 | orchestrator | 16:26:28.041 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-28 16:26:28.041415 | orchestrator | 16:26:28.041 STDOUT terraform:  + all_tags = (known after apply) 2025-05-28 16:26:28.041441 | orchestrator | 16:26:28.041 STDOUT terraform:  + availability_zone = "nova" 2025-05-28 16:26:28.041464 | orchestrator | 16:26:28.041 STDOUT terraform:  + config_drive = true 2025-05-28 
16:26:28.041502 | orchestrator | 16:26:28.041 STDOUT terraform:  + created = (known after apply) 2025-05-28 16:26:28.041538 | orchestrator | 16:26:28.041 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-28 16:26:28.041574 | orchestrator | 16:26:28.041 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-28 16:26:28.041610 | orchestrator | 16:26:28.041 STDOUT terraform:  + force_delete = false 2025-05-28 16:26:28.041642 | orchestrator | 16:26:28.041 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-05-28 16:26:28.041680 | orchestrator | 16:26:28.041 STDOUT terraform:  + id = (known after apply) 2025-05-28 16:26:28.041716 | orchestrator | 16:26:28.041 STDOUT terraform:  + image_id = (known after apply) 2025-05-28 16:26:28.041755 | orchestrator | 16:26:28.041 STDOUT terraform:  + image_name = (known after apply) 2025-05-28 16:26:28.041778 | orchestrator | 16:26:28.041 STDOUT terraform:  + key_pair = "testbed" 2025-05-28 16:26:28.041810 | orchestrator | 16:26:28.041 STDOUT terraform:  + name = "testbed-node-4" 2025-05-28 16:26:28.041835 | orchestrator | 16:26:28.041 STDOUT terraform:  + power_state = "active" 2025-05-28 16:26:28.041873 | orchestrator | 16:26:28.041 STDOUT terraform:  + region = (known after apply) 2025-05-28 16:26:28.041920 | orchestrator | 16:26:28.041 STDOUT terraform:  + security_groups = (known after apply) 2025-05-28 16:26:28.041949 | orchestrator | 16:26:28.041 STDOUT terraform:  + stop_before_destroy = false 2025-05-28 16:26:28.041982 | orchestrator | 16:26:28.041 STDOUT terraform:  + updated = (known after apply) 2025-05-28 16:26:28.042049 | orchestrator | 16:26:28.041 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-28 16:26:28.042059 | orchestrator | 16:26:28.042 STDOUT terraform:  + block_device { 2025-05-28 16:26:28.042090 | orchestrator | 16:26:28.042 STDOUT terraform:  + boot_index = 0 2025-05-28 16:26:28.042118 | orchestrator | 16:26:28.042 STDOUT terraform:  + delete_on_termination = false 2025-05-28 16:26:28.042149 | orchestrator | 16:26:28.042 STDOUT terraform:  + destination_type = "volume" 2025-05-28 16:26:28.042187 | orchestrator | 16:26:28.042 STDOUT terraform:  + multiattach = false 2025-05-28 16:26:28.042213 | orchestrator | 16:26:28.042 STDOUT terraform:  + source_type = "volume" 2025-05-28 16:26:28.042254 | orchestrator | 16:26:28.042 STDOUT terraform:  + uuid = (known after apply) 2025-05-28 16:26:28.042262 | orchestrator | 16:26:28.042 STDOUT terraform:  } 2025-05-28 16:26:28.042282 | orchestrator | 16:26:28.042 STDOUT terraform:  + network { 2025-05-28 16:26:28.042304 | orchestrator | 16:26:28.042 STDOUT terraform:  + access_network = false 2025-05-28 16:26:28.042339 | orchestrator | 16:26:28.042 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-28 16:26:28.042372 | orchestrator | 16:26:28.042 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-28 16:26:28.042407 | orchestrator | 16:26:28.042 STDOUT terraform:  + mac = (known after apply) 2025-05-28 16:26:28.042439 | orchestrator | 16:26:28.042 STDOUT terraform:  + name = (known after apply) 2025-05-28 16:26:28.042473 | orchestrator | 16:26:28.042 STDOUT terraform:  + port = (known after apply) 2025-05-28 16:26:28.042506 | orchestrator | 16:26:28.042 STDOUT terraform:  + uuid = (known after apply) 2025-05-28 16:26:28.042523 | orchestrator | 16:26:28.042 STDOUT terraform:  } 2025-05-28 16:26:28.042527 | orchestrator | 16:26:28.042 STDOUT terraform:  } 2025-05-28 16:26:28.042645 | orchestrator | 16:26:28.042 
STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-05-28 16:26:28.042686 | orchestrator | 16:26:28.042 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-28 16:26:28.042723 | orchestrator | 16:26:28.042 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-28 16:26:28.042758 | orchestrator | 16:26:28.042 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-28 16:26:28.042794 | orchestrator | 16:26:28.042 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-28 16:26:28.042831 | orchestrator | 16:26:28.042 STDOUT terraform:  + all_tags = (known after apply) 2025-05-28 16:26:28.042858 | orchestrator | 16:26:28.042 STDOUT terraform:  + availability_zone = "nova" 2025-05-28 16:26:28.042882 | orchestrator | 16:26:28.042 STDOUT terraform:  + config_drive = true 2025-05-28 16:26:28.042919 | orchestrator | 16:26:28.042 STDOUT terraform:  + created = (known after apply) 2025-05-28 16:26:28.042957 | orchestrator | 16:26:28.042 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-28 16:26:28.042989 | orchestrator | 16:26:28.042 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-28 16:26:28.043013 | orchestrator | 16:26:28.042 STDOUT terraform:  + force_delete = false 2025-05-28 16:26:28.043049 | orchestrator | 16:26:28.043 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-05-28 16:26:28.043087 | orchestrator | 16:26:28.043 STDOUT terraform:  + id = (known after apply) 2025-05-28 16:26:28.043128 | orchestrator | 16:26:28.043 STDOUT terraform:  + image_id = (known after apply) 2025-05-28 16:26:28.043161 | orchestrator | 16:26:28.043 STDOUT terraform:  + image_name = (known after apply) 2025-05-28 16:26:28.043193 | orchestrator | 16:26:28.043 STDOUT terraform:  + key_pair = "testbed" 2025-05-28 16:26:28.043220 | orchestrator | 16:26:28.043 STDOUT terraform:  + name = "testbed-node-5" 2025-05-28 16:26:28.043246 | orchestrator | 16:26:28.043 STDOUT terraform:  + power_state = "active" 2025-05-28 16:26:28.043283 | orchestrator | 16:26:28.043 STDOUT terraform:  + region = (known after apply) 2025-05-28 16:26:28.043330 | orchestrator | 16:26:28.043 STDOUT terraform:  + security_groups = (known after apply) 2025-05-28 16:26:28.043359 | orchestrator | 16:26:28.043 STDOUT terraform:  + stop_before_destroy = false 2025-05-28 16:26:28.043393 | orchestrator | 16:26:28.043 STDOUT terraform:  + updated = (known after apply) 2025-05-28 16:26:28.043444 | orchestrator | 16:26:28.043 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-28 16:26:28.043467 | orchestrator | 16:26:28.043 STDOUT terraform:  + block_device { 2025-05-28 16:26:28.043492 | orchestrator | 16:26:28.043 STDOUT terraform:  + boot_index = 0 2025-05-28 16:26:28.043521 | orchestrator | 16:26:28.043 STDOUT terraform:  + delete_on_termination = false 2025-05-28 16:26:28.043583 | orchestrator | 16:26:28.043 STDOUT terraform:  + destination_type = "volume" 2025-05-28 16:26:28.043616 | orchestrator | 16:26:28.043 STDOUT terraform:  + multiattach = false 2025-05-28 16:26:28.043645 | orchestrator | 16:26:28.043 STDOUT terraform:  + source_type = "volume" 2025-05-28 16:26:28.043686 | orchestrator | 16:26:28.043 STDOUT terraform:  + uuid = (known after apply) 2025-05-28 16:26:28.043694 | orchestrator | 16:26:28.043 STDOUT terraform:  } 2025-05-28 16:26:28.043714 | orchestrator | 16:26:28.043 STDOUT terraform:  + network { 2025-05-28 16:26:28.043739 | orchestrator | 16:26:28.043 STDOUT terraform:  + 
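For readability, the HCL that produces six such instances would look roughly like the sketch below. This is a reconstruction from the plan output, not the testbed's actual sources: the volume and port resource references and the user_data path are assumptions.

    resource "openstack_compute_instance_v2" "node_server" {
      count             = 6                              # six identical nodes, per the plan above
      name              = "testbed-node-${count.index}"
      availability_zone = "nova"
      flavor_name       = "OSISM-8V-32"
      key_pair          = "testbed"
      config_drive      = true
      power_state       = "active"
      user_data         = file("user_data.yml")          # hypothetical path; the plan only prints a hash of the content

      # Boot from a pre-created volume rather than an image.
      block_device {
        boot_index            = 0
        source_type           = "volume"
        destination_type      = "volume"
        delete_on_termination = false
        uuid                  = openstack_blockstorage_volume_v3.node_volume[count.index].id  # assumed volume resource name
      }

      # Attach via a pre-built management port so the fixed IP and
      # allowed_address_pairs (shown further below) stay under Terraform's control.
      network {
        port = openstack_networking_port_v2.node_port_management[count.index].id
      }
    }

Booting from a volume is also consistent with image_id and image_name remaining (known after apply) above: Nova derives them from the boot volume rather than from the configuration.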
2025-05-28 16:26:28.043999 | orchestrator | 16:26:28.043 STDOUT terraform: # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

2025-05-28 16:26:28.044307 | orchestrator | 16:26:28.044 STDOUT terraform: # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  (node_volume_attachment[1] through node_volume_attachment[8] plan the same fully computed block; their headers are logged between 16:26:28.044 and 16:26:28.049.)
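Every field of the attachments is (known after apply) because both endpoints are created in the same apply. A sketch under the same caveats — in particular, how the nine volumes are distributed over the six nodes is not visible in this excerpt, so the mapping below is purely illustrative:

    resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      count       = 9
      instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id  # illustrative distribution only
      volume_id   = openstack_blockstorage_volume_v3.extra_volume[count.index].id  # assumed volume resource name
    }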
2025-05-28 16:26:28.049140 | orchestrator | 16:26:28.049 STDOUT terraform: # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

2025-05-28 16:26:28.049413 | orchestrator | 16:26:28.049 STDOUT terraform: # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

2025-05-28 16:26:28.049827 | orchestrator | 16:26:28.049 STDOUT terraform: # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)
      + segments                  (known after apply)
    }
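The manager becomes externally reachable through a floating IP drawn from the public pool and bound to its management port. A minimal sketch with the same disclaimers:

    resource "openstack_networking_network_v2" "net_management" {
      name                    = "net-testbed-management"
      availability_zone_hints = ["nova"]
    }

    resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      pool = "public"
    }

    resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
      port_id     = openstack_networking_port_v2.manager_port_management.id
    }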
2025-05-28 16:26:28.050584 | orchestrator | 16:26:28.050 STDOUT terraform: # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }
2025-05-28 16:26:28.051540 | orchestrator | 16:26:28.051 STDOUT terraform: # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  (node_port_management[1] through node_port_management[5] are identical apart from fixed_ip.ip_address, which steps from 192.168.16.11 to 192.168.16.15; their headers are logged at 16:26:28.052 through 16:26:28.057.)
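Each node port pins a deterministic management address (192.168.16.10 plus the node index) and whitelists additional prefixes via allowed_address_pairs, so addresses such as 192.168.16.8, .9 and .254 and the 192.168.112.0/20 range can float between nodes without Neutron's port security dropping the traffic. Sketched below; the subnet resource name is an assumption, since its plan block is not part of this excerpt:

    resource "openstack_networking_port_v2" "node_port_management" {
      count      = 6
      network_id = openstack_networking_network_v2.net_management.id

      fixed_ip {
        subnet_id  = openstack_networking_subnet_v2.subnet_management.id  # assumed; subnet block not shown here
        ip_address = "192.168.16.${10 + count.index}"
      }

      # Without these entries, anti-spoofing rules would drop traffic
      # sourced from the shared/virtual addresses.
      allowed_address_pairs {
        ip_address = "192.168.112.0/20"
      }
      allowed_address_pairs {
        ip_address = "192.168.16.254/20"
      }
      allowed_address_pairs {
        ip_address = "192.168.16.8/20"
      }
      allowed_address_pairs {
        ip_address = "192.168.16.9/20"
      }
    }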
2025-05-28 16:26:28.058305 | orchestrator | 16:26:28.058 STDOUT terraform: # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

2025-05-28 16:26:28.058614 | orchestrator | 16:26:28.058 STDOUT terraform: # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)
      + external_fixed_ip         (known after apply)
    }
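The router uplinks the management subnet to the provider's external network, which is referenced by a fixed UUID rather than created here. Roughly, under the same assumptions as above:

    resource "openstack_networking_router_v2" "router" {
      name                    = "testbed"
      external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"  # the provider's public network, per the plan
      availability_zone_hints = ["nova"]
    }

    resource "openstack_networking_router_interface_v2" "router_interface" {
      router_id = openstack_networking_router_v2.router.id
      subnet_id = openstack_networking_subnet_v2.subnet_management.id   # same assumed subnet resource as above
    }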
orchestrator | 16:26:28.058 STDOUT terraform:  + availability_zone_hints = [ 2025-05-28 16:26:28.058762 | orchestrator | 16:26:28.058 STDOUT terraform:  + "nova", 2025-05-28 16:26:28.058784 | orchestrator | 16:26:28.058 STDOUT terraform:  ] 2025-05-28 16:26:28.058823 | orchestrator | 16:26:28.058 STDOUT terraform:  + distributed = (known after apply) 2025-05-28 16:26:28.058860 | orchestrator | 16:26:28.058 STDOUT terraform:  + enable_snat = (known after apply) 2025-05-28 16:26:28.058914 | orchestrator | 16:26:28.058 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-05-28 16:26:28.058950 | orchestrator | 16:26:28.058 STDOUT terraform:  + id = (known after apply) 2025-05-28 16:26:28.058983 | orchestrator | 16:26:28.058 STDOUT terraform:  + name = "testbed" 2025-05-28 16:26:28.059021 | orchestrator | 16:26:28.058 STDOUT terraform:  + region = (known after apply) 2025-05-28 16:26:28.059059 | orchestrator | 16:26:28.059 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-28 16:26:28.059089 | orchestrator | 16:26:28.059 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-05-28 16:26:28.059096 | orchestrator | 16:26:28.059 STDOUT terraform:  } 2025-05-28 16:26:28.059153 | orchestrator | 16:26:28.059 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-05-28 16:26:28.059208 | orchestrator | 16:26:28.059 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-05-28 16:26:28.059229 | orchestrator | 16:26:28.059 STDOUT terraform:  + description = "ssh" 2025-05-28 16:26:28.059255 | orchestrator | 16:26:28.059 STDOUT terraform:  + direction = "ingress" 2025-05-28 16:26:28.059278 | orchestrator | 16:26:28.059 STDOUT terraform:  + ethertype = "IPv4" 2025-05-28 16:26:28.059310 | orchestrator | 16:26:28.059 STDOUT terraform:  + id = (known after apply) 2025-05-28 16:26:28.059333 | orchestrator | 16:26:28.059 STDOUT terraform:  + port_range_max = 22 2025-05-28 16:26:28.059354 | orchestrator | 16:26:28.059 STDOUT terraform:  + port_range_min = 22 2025-05-28 16:26:28.059377 | orchestrator | 16:26:28.059 STDOUT terraform:  + protocol = "tcp" 2025-05-28 16:26:28.059408 | orchestrator | 16:26:28.059 STDOUT terraform:  + region = (known after apply) 2025-05-28 16:26:28.059439 | orchestrator | 16:26:28.059 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-28 16:26:28.059466 | orchestrator | 16:26:28.059 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-28 16:26:28.059497 | orchestrator | 16:26:28.059 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-28 16:26:28.059530 | orchestrator | 16:26:28.059 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-28 16:26:28.059537 | orchestrator | 16:26:28.059 STDOUT terraform:  } 2025-05-28 16:26:28.059638 | orchestrator | 16:26:28.059 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-05-28 16:26:28.059697 | orchestrator | 16:26:28.059 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-05-28 16:26:28.059725 | orchestrator | 16:26:28.059 STDOUT terraform:  + description = "wireguard" 2025-05-28 16:26:28.059751 | orchestrator | 16:26:28.059 STDOUT terraform:  + direction = "ingress" 2025-05-28 16:26:28.059775 | orchestrator | 16:26:28.059 STDOUT terraform:  + ethertype = "IPv4" 2025-05-28 16:26:28.059808 | orchestrator | 16:26:28.059 
STDOUT terraform:  + id = (known after apply) 2025-05-28 16:26:28.059830 | orchestrator | 16:26:28.059 STDOUT terraform:  + port_range_max = 51820 2025-05-28 16:26:28.059854 | orchestrator | 16:26:28.059 STDOUT terraform:  + port_range_min = 51820 2025-05-28 16:26:28.059877 | orchestrator | 16:26:28.059 STDOUT terraform:  + protocol = "udp" 2025-05-28 16:26:28.059909 | orchestrator | 16:26:28.059 STDOUT terraform:  + region = (known after apply) 2025-05-28 16:26:28.059943 | orchestrator | 16:26:28.059 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-28 16:26:28.059970 | orchestrator | 16:26:28.059 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-28 16:26:28.060002 | orchestrator | 16:26:28.059 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-28 16:26:28.060033 | orchestrator | 16:26:28.059 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-28 16:26:28.060049 | orchestrator | 16:26:28.060 STDOUT terraform:  } 2025-05-28 16:26:28.060111 | orchestrator | 16:26:28.060 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-05-28 16:26:28.060170 | orchestrator | 16:26:28.060 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-05-28 16:26:28.060193 | orchestrator | 16:26:28.060 STDOUT terraform:  + direction = "ingress" 2025-05-28 16:26:28.060216 | orchestrator | 16:26:28.060 STDOUT terraform:  + ethertype = "IPv4" 2025-05-28 16:26:28.060252 | orchestrator | 16:26:28.060 STDOUT terraform:  + id = (known after apply) 2025-05-28 16:26:28.060270 | orchestrator | 16:26:28.060 STDOUT terraform:  + protocol = "tcp" 2025-05-28 16:26:28.060302 | orchestrator | 16:26:28.060 STDOUT terraform:  + region = (known after apply) 2025-05-28 16:26:28.060334 | orchestrator | 16:26:28.060 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-28 16:26:28.060366 | orchestrator | 16:26:28.060 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-28 16:26:28.060400 | orchestrator | 16:26:28.060 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-28 16:26:28.060430 | orchestrator | 16:26:28.060 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-28 16:26:28.060444 | orchestrator | 16:26:28.060 STDOUT terraform:  } 2025-05-28 16:26:28.060495 | orchestrator | 16:26:28.060 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-05-28 16:26:28.060567 | orchestrator | 16:26:28.060 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-05-28 16:26:28.060591 | orchestrator | 16:26:28.060 STDOUT terraform:  + direction = "ingress" 2025-05-28 16:26:28.060616 | orchestrator | 16:26:28.060 STDOUT terraform:  + ethertype = "IPv4" 2025-05-28 16:26:28.060650 | orchestrator | 16:26:28.060 STDOUT terraform:  + id = (known after apply) 2025-05-28 16:26:28.060674 | orchestrator | 16:26:28.060 STDOUT terraform:  + protocol = "udp" 2025-05-28 16:26:28.060706 | orchestrator | 16:26:28.060 STDOUT terraform:  + region = (known after apply) 2025-05-28 16:26:28.060737 | orchestrator | 16:26:28.060 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-28 16:26:28.060770 | orchestrator | 16:26:28.060 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-28 16:26:28.060800 | orchestrator | 16:26:28.060 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-28 16:26:28.060833 | 
orchestrator | 16:26:28.060 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-28 16:26:28.060840 | orchestrator | 16:26:28.060 STDOUT terraform:  } 2025-05-28 16:26:28.060899 | orchestrator | 16:26:28.060 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-05-28 16:26:28.060955 | orchestrator | 16:26:28.060 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-05-28 16:26:28.060983 | orchestrator | 16:26:28.060 STDOUT terraform:  + direction = "ingress" 2025-05-28 16:26:28.060999 | orchestrator | 16:26:28.060 STDOUT terraform:  + ethertype = "IPv4" 2025-05-28 16:26:28.061031 | orchestrator | 16:26:28.060 STDOUT terraform:  + id = (known after apply) 2025-05-28 16:26:28.061053 | orchestrator | 16:26:28.061 STDOUT terraform:  + protocol = "icmp" 2025-05-28 16:26:28.061087 | orchestrator | 16:26:28.061 STDOUT terraform:  + region = (known after apply) 2025-05-28 16:26:28.061120 | orchestrator | 16:26:28.061 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-28 16:26:28.061145 | orchestrator | 16:26:28.061 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-28 16:26:28.061179 | orchestrator | 16:26:28.061 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-28 16:26:28.061211 | orchestrator | 16:26:28.061 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-28 16:26:28.061222 | orchestrator | 16:26:28.061 STDOUT terraform:  } 2025-05-28 16:26:28.061272 | orchestrator | 16:26:28.061 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-05-28 16:26:28.061321 | orchestrator | 16:26:28.061 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-05-28 16:26:28.061347 | orchestrator | 16:26:28.061 STDOUT terraform:  + direction = "ingress" 2025-05-28 16:26:28.061369 | orchestrator | 16:26:28.061 STDOUT terraform:  + ethertype = "IPv4" 2025-05-28 16:26:28.061403 | orchestrator | 16:26:28.061 STDOUT terraform:  + id = (known after apply) 2025-05-28 16:26:28.061430 | orchestrator | 16:26:28.061 STDOUT terraform:  + protocol = "tcp" 2025-05-28 16:26:28.061459 | orchestrator | 16:26:28.061 STDOUT terraform:  + region = (known after apply) 2025-05-28 16:26:28.061490 | orchestrator | 16:26:28.061 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-28 16:26:28.061516 | orchestrator | 16:26:28.061 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-28 16:26:28.061585 | orchestrator | 16:26:28.061 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-28 16:26:28.061593 | orchestrator | 16:26:28.061 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-28 16:26:28.061599 | orchestrator | 16:26:28.061 STDOUT terraform:  } 2025-05-28 16:26:28.061655 | orchestrator | 16:26:28.061 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-05-28 16:26:28.061706 | orchestrator | 16:26:28.061 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-05-28 16:26:28.061731 | orchestrator | 16:26:28.061 STDOUT terraform:  + direction = "ingress" 2025-05-28 16:26:28.061754 | orchestrator | 16:26:28.061 STDOUT terraform:  + ethertype = "IPv4" 2025-05-28 16:26:28.061787 | orchestrator | 16:26:28.061 STDOUT terraform:  + id = (known after apply) 2025-05-28 16:26:28.061809 | orchestrator | 16:26:28.061 STDOUT 
terraform:  + protocol = "udp" 2025-05-28 16:26:28.061842 | orchestrator | 16:26:28.061 STDOUT terraform:  + region = (known after apply) 2025-05-28 16:26:28.061873 | orchestrator | 16:26:28.061 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-28 16:26:28.061900 | orchestrator | 16:26:28.061 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-28 16:26:28.061932 | orchestrator | 16:26:28.061 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-28 16:26:28.061969 | orchestrator | 16:26:28.061 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-28 16:26:28.061977 | orchestrator | 16:26:28.061 STDOUT terraform:  } 2025-05-28 16:26:28.062043 | orchestrator | 16:26:28.061 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-05-28 16:26:28.062100 | orchestrator | 16:26:28.062 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-05-28 16:26:28.062108 | orchestrator | 16:26:28.062 STDOUT terraform:  + direction = "ingress" 2025-05-28 16:26:28.062143 | orchestrator | 16:26:28.062 STDOUT terraform:  + ethertype = "IPv4" 2025-05-28 16:26:28.062178 | orchestrator | 16:26:28.062 STDOUT terraform:  + id = (known after apply) 2025-05-28 16:26:28.062185 | orchestrator | 16:26:28.062 STDOUT terraform:  + protocol = "icmp" 2025-05-28 16:26:28.062228 | orchestrator | 16:26:28.062 STDOUT terraform:  + region = (known after apply) 2025-05-28 16:26:28.062261 | orchestrator | 16:26:28.062 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-28 16:26:28.062269 | orchestrator | 16:26:28.062 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-28 16:26:28.062314 | orchestrator | 16:26:28.062 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-28 16:26:28.062329 | orchestrator | 16:26:28.062 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-28 16:26:28.062353 | orchestrator | 16:26:28.062 STDOUT terraform:  } 2025-05-28 16:26:28.062404 | orchestrator | 16:26:28.062 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-05-28 16:26:28.062454 | orchestrator | 16:26:28.062 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-05-28 16:26:28.062462 | orchestrator | 16:26:28.062 STDOUT terraform:  + description = "vrrp" 2025-05-28 16:26:28.062499 | orchestrator | 16:26:28.062 STDOUT terraform:  + direction = "ingress" 2025-05-28 16:26:28.062507 | orchestrator | 16:26:28.062 STDOUT terraform:  + ethertype = "IPv4" 2025-05-28 16:26:28.062558 | orchestrator | 16:26:28.062 STDOUT terraform:  + id = (known after apply) 2025-05-28 16:26:28.062604 | orchestrator | 16:26:28.062 STDOUT terraform:  + protocol = "112" 2025-05-28 16:26:28.062636 | orchestrator | 16:26:28.062 STDOUT terraform:  + region = (known after apply) 2025-05-28 16:26:28.062668 | orchestrator | 16:26:28.062 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-28 16:26:28.062699 | orchestrator | 16:26:28.062 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-28 16:26:28.062721 | orchestrator | 16:26:28.062 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-28 16:26:28.062754 | orchestrator | 16:26:28.062 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-28 16:26:28.062759 | orchestrator | 16:26:28.062 STDOUT terraform:  } 2025-05-28 16:26:28.062814 | orchestrator | 16:26:28.062 STDOUT terraform:  # 
openstack_networking_secgroup_v2.security_group_management will be created 2025-05-28 16:26:28.062865 | orchestrator | 16:26:28.062 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-05-28 16:26:28.062894 | orchestrator | 16:26:28.062 STDOUT terraform:  + all_tags = (known after apply) 2025-05-28 16:26:28.062929 | orchestrator | 16:26:28.062 STDOUT terraform:  + description = "management security group" 2025-05-28 16:26:28.062962 | orchestrator | 16:26:28.062 STDOUT terraform:  + id = (known after apply) 2025-05-28 16:26:28.062992 | orchestrator | 16:26:28.062 STDOUT terraform:  + name = "testbed-management" 2025-05-28 16:26:28.063021 | orchestrator | 16:26:28.062 STDOUT terraform:  + region = (known after apply) 2025-05-28 16:26:28.063051 | orchestrator | 16:26:28.063 STDOUT terraform:  + stateful = (known after apply) 2025-05-28 16:26:28.063082 | orchestrator | 16:26:28.063 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-28 16:26:28.063087 | orchestrator | 16:26:28.063 STDOUT terraform:  } 2025-05-28 16:26:28.063139 | orchestrator | 16:26:28.063 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-05-28 16:26:28.063192 | orchestrator | 16:26:28.063 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-05-28 16:26:28.063223 | orchestrator | 16:26:28.063 STDOUT terraform:  + all_tags = (known after apply) 2025-05-28 16:26:28.063248 | orchestrator | 16:26:28.063 STDOUT terraform:  + description = "node security group" 2025-05-28 16:26:28.063263 | orchestrator | 16:26:28.063 STDOUT terraform:  + id = (known after apply) 2025-05-28 16:26:28.063302 | orchestrator | 16:26:28.063 STDOUT terraform:  + name = "testbed-node" 2025-05-28 16:26:28.063327 | orchestrator | 16:26:28.063 STDOUT terraform:  + region = (known after apply) 2025-05-28 16:26:28.063358 | orchestrator | 16:26:28.063 STDOUT terraform:  + stateful = (known after apply) 2025-05-28 16:26:28.063389 | orchestrator | 16:26:28.063 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-28 16:26:28.063395 | orchestrator | 16:26:28.063 STDOUT terraform:  } 2025-05-28 16:26:28.063449 | orchestrator | 16:26:28.063 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-05-28 16:26:28.063494 | orchestrator | 16:26:28.063 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-05-28 16:26:28.063532 | orchestrator | 16:26:28.063 STDOUT terraform:  + all_tags = (known after apply) 2025-05-28 16:26:28.063542 | orchestrator | 16:26:28.063 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-05-28 16:26:28.063599 | orchestrator | 16:26:28.063 STDOUT terraform:  + dns_nameservers = [ 2025-05-28 16:26:28.063611 | orchestrator | 16:26:28.063 STDOUT terraform:  + "8.8.8.8", 2025-05-28 16:26:28.063619 | orchestrator | 16:26:28.063 STDOUT terraform:  + "9.9.9.9", 2025-05-28 16:26:28.063646 | orchestrator | 16:26:28.063 STDOUT terraform:  ] 2025-05-28 16:26:28.063653 | orchestrator | 16:26:28.063 STDOUT terraform:  + enable_dhcp = true 2025-05-28 16:26:28.063697 | orchestrator | 16:26:28.063 STDOUT terraform:  + gateway_ip = (known after apply) 2025-05-28 16:26:28.063731 | orchestrator | 16:26:28.063 STDOUT terraform:  + id = (known after apply) 2025-05-28 16:26:28.063738 | orchestrator | 16:26:28.063 STDOUT terraform:  + ip_version = 4 2025-05-28 16:26:28.063781 | orchestrator | 16:26:28.063 STDOUT terraform:  + ipv6_address_mode = (known 
after apply) 2025-05-28 16:26:28.063815 | orchestrator | 16:26:28.063 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-05-28 16:26:28.063852 | orchestrator | 16:26:28.063 STDOUT terraform:  + name = "subnet-testbed-management" 2025-05-28 16:26:28.063885 | orchestrator | 16:26:28.063 STDOUT terraform:  + network_id = (known after apply) 2025-05-28 16:26:28.063892 | orchestrator | 16:26:28.063 STDOUT terraform:  + no_gateway = false 2025-05-28 16:26:28.063933 | orchestrator | 16:26:28.063 STDOUT terraform:  + region = (known after apply) 2025-05-28 16:26:28.063966 | orchestrator | 16:26:28.063 STDOUT terraform:  + service_types = (known after apply) 2025-05-28 16:26:28.063997 | orchestrator | 16:26:28.063 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-28 16:26:28.064005 | orchestrator | 16:26:28.063 STDOUT terraform:  + allocation_pool { 2025-05-28 16:26:28.064038 | orchestrator | 16:26:28.063 STDOUT terraform:  + end = "192.168.31.250" 2025-05-28 16:26:28.064062 | orchestrator | 16:26:28.064 STDOUT terraform:  + start = "192.168.31.200" 2025-05-28 16:26:28.064067 | orchestrator | 16:26:28.064 STDOUT terraform:  } 2025-05-28 16:26:28.064073 | orchestrator | 16:26:28.064 STDOUT terraform:  } 2025-05-28 16:26:28.064113 | orchestrator | 16:26:28.064 STDOUT terraform:  # terraform_data.image will be created 2025-05-28 16:26:28.064148 | orchestrator | 16:26:28.064 STDOUT terraform:  + resource "terraform_data" "image" { 2025-05-28 16:26:28.064155 | orchestrator | 16:26:28.064 STDOUT terraform:  + id = (known after apply) 2025-05-28 16:26:28.064185 | orchestrator | 16:26:28.064 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-05-28 16:26:28.064192 | orchestrator | 16:26:28.064 STDOUT terraform:  + output = (known after apply) 2025-05-28 16:26:28.064200 | orchestrator | 16:26:28.064 STDOUT terraform:  } 2025-05-28 16:26:28.064245 | orchestrator | 16:26:28.064 STDOUT terraform:  # terraform_data.image_node will be created 2025-05-28 16:26:28.064275 | orchestrator | 16:26:28.064 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-05-28 16:26:28.064284 | orchestrator | 16:26:28.064 STDOUT terraform:  + id = (known after apply) 2025-05-28 16:26:28.064316 | orchestrator | 16:26:28.064 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-05-28 16:26:28.064325 | orchestrator | 16:26:28.064 STDOUT terraform:  + output = (known after apply) 2025-05-28 16:26:28.064353 | orchestrator | 16:26:28.064 STDOUT terraform:  } 2025-05-28 16:26:28.064387 | orchestrator | 16:26:28.064 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-05-28 16:26:28.064394 | orchestrator | 16:26:28.064 STDOUT terraform: Changes to Outputs: 2025-05-28 16:26:28.064423 | orchestrator | 16:26:28.064 STDOUT terraform:  + manager_address = (sensitive value) 2025-05-28 16:26:28.064447 | orchestrator | 16:26:28.064 STDOUT terraform:  + private_key = (sensitive value) 2025-05-28 16:26:28.331076 | orchestrator | 16:26:28.329 STDOUT terraform: terraform_data.image_node: Creating... 2025-05-28 16:26:28.331198 | orchestrator | 16:26:28.330 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=2c0218b1-7b5f-0b26-34eb-694a79105966] 2025-05-28 16:26:28.331217 | orchestrator | 16:26:28.330 STDOUT terraform: terraform_data.image: Creating... 
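
The plan above maps one-to-one onto resources of the Terraform OpenStack provider. As a minimal sketch of what the underlying configuration plausibly looks like, reconstructed purely from the plan output printed here (names, ports, protocols and CIDRs are copied from the log; the security_group_id wiring is an assumption, since the plan only prints "(known after apply)"):

    resource "openstack_networking_secgroup_v2" "security_group_management" {
      name        = "testbed-management"
      description = "management security group"
    }

    resource "openstack_networking_secgroup_v2" "security_group_node" {
      name        = "testbed-node"
      description = "node security group"
    }

    # rule1: SSH to the manager from anywhere
    resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      description       = "ssh"
      direction         = "ingress"
      ethertype         = "IPv4"
      protocol          = "tcp"
      port_range_min    = 22
      port_range_max    = 22
      remote_ip_prefix  = "0.0.0.0/0"
      security_group_id = openstack_networking_secgroup_v2.security_group_management.id
    }

    # rule2: WireGuard (UDP/51820) from anywhere
    resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      description       = "wireguard"
      direction         = "ingress"
      ethertype         = "IPv4"
      protocol          = "udp"
      port_range_min    = 51820
      port_range_max    = 51820
      remote_ip_prefix  = "0.0.0.0/0"
      security_group_id = openstack_networking_secgroup_v2.security_group_management.id
    }

    # VRRP is IP protocol 112 (used e.g. by keepalived for VIP failover);
    # which group this rule belongs to is not visible in the plan, so
    # attaching it to the node group is an assumption
    resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      description       = "vrrp"
      direction         = "ingress"
      ethertype         = "IPv4"
      protocol          = "112"
      remote_ip_prefix  = "0.0.0.0/0"
      security_group_id = openstack_networking_secgroup_v2.security_group_node.id
    }

Note the split visible in the plan: SSH and WireGuard are open world-wide, while rule3/rule4 (plain TCP/UDP) are limited to 192.168.16.0/20, which fits a testbed that is only entered via SSH or WireGuard and keeps all other traffic internal.
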
2025-05-28 16:26:28.331876 | orchestrator | 16:26:28.331 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=5d68bcee-de65-2d19-af20-349f5f6c8362] 2025-05-28 16:26:28.349184 | orchestrator | 16:26:28.349 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-05-28 16:26:28.349265 | orchestrator | 16:26:28.349 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-05-28 16:26:28.358839 | orchestrator | 16:26:28.358 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-05-28 16:26:28.358886 | orchestrator | 16:26:28.358 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-05-28 16:26:28.359278 | orchestrator | 16:26:28.359 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-05-28 16:26:28.361289 | orchestrator | 16:26:28.361 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-05-28 16:26:28.363844 | orchestrator | 16:26:28.363 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-05-28 16:26:28.364013 | orchestrator | 16:26:28.363 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-05-28 16:26:28.365724 | orchestrator | 16:26:28.365 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-05-28 16:26:28.368896 | orchestrator | 16:26:28.368 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-05-28 16:26:28.773671 | orchestrator | 16:26:28.773 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-05-28 16:26:28.782409 | orchestrator | 16:26:28.781 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-05-28 16:26:28.785169 | orchestrator | 16:26:28.783 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-05-28 16:26:28.788891 | orchestrator | 16:26:28.788 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-05-28 16:26:28.864580 | orchestrator | 16:26:28.864 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-05-28 16:26:28.870478 | orchestrator | 16:26:28.870 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-05-28 16:26:34.339723 | orchestrator | 16:26:34.339 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=44259db8-bc5d-40cd-b269-4ce9d5efe55e] 2025-05-28 16:26:34.352082 | orchestrator | 16:26:34.351 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-05-28 16:26:38.360965 | orchestrator | 16:26:38.360 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-05-28 16:26:38.361899 | orchestrator | 16:26:38.361 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-05-28 16:26:38.365199 | orchestrator | 16:26:38.364 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-05-28 16:26:38.365316 | orchestrator | 16:26:38.365 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-05-28 16:26:38.366819 | orchestrator | 16:26:38.366 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... 
[10s elapsed] 2025-05-28 16:26:38.370064 | orchestrator | 16:26:38.369 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-05-28 16:26:38.783020 | orchestrator | 16:26:38.782 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-05-28 16:26:38.790354 | orchestrator | 16:26:38.790 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-05-28 16:26:38.871929 | orchestrator | 16:26:38.871 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-05-28 16:26:38.937296 | orchestrator | 16:26:38.937 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 11s [id=0444fcd6-ace4-41be-a60f-d61a86741ad0] 2025-05-28 16:26:38.950587 | orchestrator | 16:26:38.950 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 11s [id=c3ba669b-02ce-4ac9-8d34-f5b1bbc1f6b4] 2025-05-28 16:26:38.950846 | orchestrator | 16:26:38.950 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-05-28 16:26:38.956656 | orchestrator | 16:26:38.956 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-05-28 16:26:38.957318 | orchestrator | 16:26:38.957 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 11s [id=66780fe2-f30a-4cd5-a925-045679329f08] 2025-05-28 16:26:38.962079 | orchestrator | 16:26:38.961 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-05-28 16:26:38.973338 | orchestrator | 16:26:38.973 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 11s [id=d5a98c17-e489-4dc0-a000-f021a8d49d4d] 2025-05-28 16:26:38.979287 | orchestrator | 16:26:38.979 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-05-28 16:26:38.979501 | orchestrator | 16:26:38.979 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=705788e5-cc1d-4d40-94fd-fb0e2f22a483] 2025-05-28 16:26:38.986411 | orchestrator | 16:26:38.986 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-05-28 16:26:39.008463 | orchestrator | 16:26:39.008 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=3045bd6c-b8ff-4958-af32-f9dea72800f3] 2025-05-28 16:26:39.022486 | orchestrator | 16:26:39.022 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-05-28 16:26:39.030487 | orchestrator | 16:26:39.030 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=3cbe8cc44d72ce5aee82d603558488b64d1ce594] 2025-05-28 16:26:39.032992 | orchestrator | 16:26:39.032 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=1369a208-db5b-4ff3-8df7-c2f8ed8178e8] 2025-05-28 16:26:39.038817 | orchestrator | 16:26:39.038 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-05-28 16:26:39.044600 | orchestrator | 16:26:39.044 STDOUT terraform: local_file.id_rsa_pub: Creating... 
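
The node_volume[0..8] and node_base_volume[0..5] entries above are plain Cinder volumes created with count, and local_sensitive_file/local_file persist the generated keypair next to the Terraform state. A rough sketch; the counts match the indices in the log and the keypair name "testbed" appears there, while sizes, volume names and file paths are invented for illustration:

    # with no public_key given, the provider generates a keypair and
    # exposes the private half as an attribute
    resource "openstack_compute_keypair_v2" "key" {
      name = "testbed"   # id=testbed in the log above
    }

    resource "openstack_blockstorage_volume_v3" "node_volume" {
      count = 9                                     # node_volume[0..8] in the log
      name  = "testbed-node-volume-${count.index}"  # naming scheme assumed
      size  = 20                                    # size is not visible in the log
    }

    resource "local_sensitive_file" "id_rsa" {
      content         = openstack_compute_keypair_v2.key.private_key
      filename        = "${path.module}/.id_rsa.testbed"  # path assumed
      file_permission = "0600"
    }

    resource "local_file" "id_rsa_pub" {
      content  = openstack_compute_keypair_v2.key.public_key
      filename = "${path.module}/.id_rsa.testbed.pub"     # path assumed
    }
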
2025-05-28 16:26:39.052610 | orchestrator | 16:26:39.052 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=f9ce27ad35b442c9b63eade9368939eb30c1de3a] 2025-05-28 16:26:39.056378 | orchestrator | 16:26:39.056 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 10s [id=80beb2a7-6ee1-4917-8c3d-de783739f119] 2025-05-28 16:26:39.060973 | orchestrator | 16:26:39.060 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-05-28 16:26:39.064372 | orchestrator | 16:26:39.064 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=da6420c4-4562-42e6-8445-8de06d590092] 2025-05-28 16:26:44.355394 | orchestrator | 16:26:44.355 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed] 2025-05-28 16:26:44.660145 | orchestrator | 16:26:44.659 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 11s [id=1e6eba2b-be6c-49e0-873b-79f1c04551d1] 2025-05-28 16:26:45.001508 | orchestrator | 16:26:45.001 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=4850a68c-dcfa-4f62-9100-66e4c9d87e7e] 2025-05-28 16:26:45.011041 | orchestrator | 16:26:45.010 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-05-28 16:26:48.952203 | orchestrator | 16:26:48.951 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed] 2025-05-28 16:26:48.958473 | orchestrator | 16:26:48.958 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-05-28 16:26:48.963499 | orchestrator | 16:26:48.963 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-05-28 16:26:48.980770 | orchestrator | 16:26:48.980 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-05-28 16:26:48.987053 | orchestrator | 16:26:48.986 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed] 2025-05-28 16:26:49.040727 | orchestrator | 16:26:49.040 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... 
[10s elapsed] 2025-05-28 16:26:49.295587 | orchestrator | 16:26:49.295 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=536d5e59-8868-442a-b439-21fdbfcfc02f] 2025-05-28 16:26:50.213275 | orchestrator | 16:26:50.212 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 11s [id=f4e43ce5-2124-49cc-9590-e2dc33c78c64] 2025-05-28 16:26:50.213475 | orchestrator | 16:26:50.213 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 11s [id=eb048e8a-8419-4b97-a2c5-865582781a7c] 2025-05-28 16:26:50.221647 | orchestrator | 16:26:50.214 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 11s [id=8413bafc-5d5c-45aa-9537-e8a0170ebd39] 2025-05-28 16:26:50.221703 | orchestrator | 16:26:50.221 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 11s [id=3b110ae9-3b24-4117-b531-0e276aed65fb] 2025-05-28 16:26:50.221720 | orchestrator | 16:26:50.221 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 11s [id=3e07e7c9-91b0-4ca1-b00e-661089b639c5] 2025-05-28 16:26:52.657486 | orchestrator | 16:26:52.657 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 8s [id=a02dd16c-5fc0-426e-9aa0-7ebc416b3adb] 2025-05-28 16:26:52.668608 | orchestrator | 16:26:52.668 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-05-28 16:26:52.668838 | orchestrator | 16:26:52.668 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-05-28 16:26:52.670806 | orchestrator | 16:26:52.670 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-05-28 16:26:52.836935 | orchestrator | 16:26:52.836 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=8f42a28b-8942-4fde-a643-c1d5e75c7bc2] 2025-05-28 16:26:52.839310 | orchestrator | 16:26:52.839 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=a377f032-66d9-4b57-ae1a-6356157f7865] 2025-05-28 16:26:52.847290 | orchestrator | 16:26:52.847 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-05-28 16:26:52.847422 | orchestrator | 16:26:52.847 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-05-28 16:26:52.851630 | orchestrator | 16:26:52.851 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-05-28 16:26:52.853137 | orchestrator | 16:26:52.852 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-05-28 16:26:52.856712 | orchestrator | 16:26:52.856 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-05-28 16:26:52.856861 | orchestrator | 16:26:52.856 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-05-28 16:26:52.857080 | orchestrator | 16:26:52.856 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-05-28 16:26:52.857143 | orchestrator | 16:26:52.857 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 
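
The ordering visible in the timestamps here (subnet, then router, then router_interface, with the interface only completing once both exist) falls out of resource references rather than explicit dependencies. A sketch using only values that appear in the plan earlier in this log (subnet name, CIDR, DNS servers, allocation pool, router name, external network id, availability zone hints):

    resource "openstack_networking_subnet_v2" "subnet_management" {
      name            = "subnet-testbed-management"
      network_id      = openstack_networking_network_v2.net_management.id
      cidr            = "192.168.16.0/20"
      ip_version      = 4
      enable_dhcp     = true
      dns_nameservers = ["8.8.8.8", "9.9.9.9"]

      allocation_pool {
        start = "192.168.31.200"
        end   = "192.168.31.250"
      }
    }

    resource "openstack_networking_router_v2" "router" {
      name                    = "testbed"
      external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      availability_zone_hints = ["nova"]
    }

    # references to both ids serialise this behind subnet and router,
    # matching the creation order in the log
    resource "openstack_networking_router_interface_v2" "router_interface" {
      router_id = openstack_networking_router_v2.router.id
      subnet_id = openstack_networking_subnet_v2.subnet_management.id
    }
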
2025-05-28 16:26:52.858652 | orchestrator | 16:26:52.858 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-05-28 16:26:52.994794 | orchestrator | 16:26:52.994 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=aa333707-6949-427c-a702-79d4456b94e3] 2025-05-28 16:26:53.005491 | orchestrator | 16:26:53.005 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-05-28 16:26:53.269006 | orchestrator | 16:26:53.268 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=b448ed4a-1507-4e2b-bca5-f78904be4da2] 2025-05-28 16:26:53.289972 | orchestrator | 16:26:53.289 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-05-28 16:26:53.418510 | orchestrator | 16:26:53.418 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=3a269f00-cbaf-421a-b09e-aadf074522f8] 2025-05-28 16:26:53.435070 | orchestrator | 16:26:53.434 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-05-28 16:26:53.488130 | orchestrator | 16:26:53.487 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=d5820b93-72cf-4273-b6cf-d425c6dadeaa] 2025-05-28 16:26:53.502689 | orchestrator | 16:26:53.502 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-05-28 16:26:53.598869 | orchestrator | 16:26:53.598 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=d02e607a-59d8-4069-8eb2-5525fd9ad59c] 2025-05-28 16:26:53.617809 | orchestrator | 16:26:53.617 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-05-28 16:26:53.655471 | orchestrator | 16:26:53.655 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=b27c6bb1-0740-4c7e-9b5e-ceb41f62cf60] 2025-05-28 16:26:53.671754 | orchestrator | 16:26:53.671 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-05-28 16:26:53.839393 | orchestrator | 16:26:53.839 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=7757e88b-c5d4-4d04-88bc-df10d4952ace] 2025-05-28 16:26:53.860430 | orchestrator | 16:26:53.860 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 
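
The management ports being created here carry a fixed IP plus allowed_address_pairs; without the pairs, Neutron port security would drop traffic addressed to VIPs or the internal overlay ranges. A sketch for the node ports, with the pair addresses copied from the plan excerpt at the top of this log; the count, the addressing scheme and the security group wiring are assumptions (the plan excerpt shows 192.168.16.15 for one port, but not which):

    resource "openstack_networking_port_v2" "node_port_management" {
      count      = 6   # node_port_management[0..5] in the log
      network_id = openstack_networking_network_v2.net_management.id
      security_group_ids = [
        openstack_networking_secgroup_v2.security_group_node.id,
      ]

      fixed_ip {
        subnet_id  = openstack_networking_subnet_v2.subnet_management.id
        # addressing scheme assumed; the plan shows e.g. 192.168.16.15
        ip_address = "192.168.16.${10 + count.index}"
      }

      # additional addresses this port may legitimately send from
      # (VIPs and internal ranges), taken verbatim from the plan
      allowed_address_pairs {
        ip_address = "192.168.112.0/20"
      }
      allowed_address_pairs {
        ip_address = "192.168.16.254/20"
      }
    }
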
2025-05-28 16:26:54.000515 | orchestrator | 16:26:54.000 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=969db399-28b2-409e-9b93-7b26e8471751] 2025-05-28 16:26:54.185403 | orchestrator | 16:26:54.185 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=9e09ef1b-3a2e-4ab8-9014-a292a737435b] 2025-05-28 16:26:58.576823 | orchestrator | 16:26:58.576 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=05fd1a50-7917-4fa1-8a17-c88eb5d6bd83] 2025-05-28 16:26:58.960966 | orchestrator | 16:26:58.960 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=30128105-74e6-4aa0-b24c-bf62958e838e] 2025-05-28 16:26:59.099456 | orchestrator | 16:26:59.099 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 5s [id=7f5bdaf0-33a7-4ffe-9463-e9bbe4386d8d] 2025-05-28 16:26:59.295256 | orchestrator | 16:26:59.294 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 5s [id=09f3103f-bc20-4ad8-9174-ce53191e6ef4] 2025-05-28 16:26:59.663261 | orchestrator | 16:26:59.662 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=76b944cd-d47d-41bf-a4d3-db252e3439ec] 2025-05-28 16:26:59.791047 | orchestrator | 16:26:59.790 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=ce8694d6-5517-4b67-a2d5-e4a517a05a3e] 2025-05-28 16:27:00.240101 | orchestrator | 16:27:00.239 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 7s [id=4af5e29f-7c8d-4c17-b413-ce58a530b3ae] 2025-05-28 16:27:00.915256 | orchestrator | 16:27:00.914 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 8s [id=2f2395a2-b7a3-4eda-a488-f1e6370231d9] 2025-05-28 16:27:00.939729 | orchestrator | 16:27:00.939 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-05-28 16:27:00.957471 | orchestrator | 16:27:00.957 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-05-28 16:27:00.965203 | orchestrator | 16:27:00.965 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-05-28 16:27:00.965814 | orchestrator | 16:27:00.965 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-05-28 16:27:00.974105 | orchestrator | 16:27:00.973 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-05-28 16:27:00.977922 | orchestrator | 16:27:00.977 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-05-28 16:27:00.983956 | orchestrator | 16:27:00.983 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-05-28 16:27:07.234116 | orchestrator | 16:27:07.233 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 6s [id=e6916673-24ab-4cf3-ae0f-a6b0777d95a3] 2025-05-28 16:27:07.245414 | orchestrator | 16:27:07.245 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-05-28 16:27:07.248486 | orchestrator | 16:27:07.248 STDOUT terraform: local_file.inventory: Creating... 2025-05-28 16:27:07.253366 | orchestrator | 16:27:07.253 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 
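
Only the manager gets a floating IP; the nodes stay reachable solely over the management network. A sketch of the two resources completing above; the pool (external network) name is not visible in the log and is an assumption:

    resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      pool = "external"   # name of the external network/pool; assumption
    }

    resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
      port_id     = openstack_networking_port_v2.manager_port_management.id
    }
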
2025-05-28 16:27:07.259648 | orchestrator | 16:27:07.259 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=7fe1032e4af1faa47e878ef64632d2a3308a0806] 2025-05-28 16:27:07.264040 | orchestrator | 16:27:07.263 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=8ddf6e54198db723747b7d014d07be84a0de935e] 2025-05-28 16:27:07.932224 | orchestrator | 16:27:07.931 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=e6916673-24ab-4cf3-ae0f-a6b0777d95a3] 2025-05-28 16:27:10.965459 | orchestrator | 16:27:10.965 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-05-28 16:27:10.966303 | orchestrator | 16:27:10.966 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-05-28 16:27:10.966439 | orchestrator | 16:27:10.966 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-05-28 16:27:10.976749 | orchestrator | 16:27:10.976 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-05-28 16:27:10.980923 | orchestrator | 16:27:10.980 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-05-28 16:27:10.987139 | orchestrator | 16:27:10.986 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-05-28 16:27:20.965745 | orchestrator | 16:27:20.965 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-05-28 16:27:20.966600 | orchestrator | 16:27:20.966 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-05-28 16:27:20.966754 | orchestrator | 16:27:20.966 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-05-28 16:27:20.977038 | orchestrator | 16:27:20.976 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-05-28 16:27:20.981419 | orchestrator | 16:27:20.981 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-05-28 16:27:20.987926 | orchestrator | 16:27:20.987 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-05-28 16:27:21.356173 | orchestrator | 16:27:21.355 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 20s [id=71f8fd10-6320-4e6b-9c05-1de829acbd34] 2025-05-28 16:27:21.566587 | orchestrator | 16:27:21.566 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 21s [id=2bd62613-85f7-4fbf-93c0-6533034bdf0a] 2025-05-28 16:27:21.671861 | orchestrator | 16:27:21.671 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=dd7fe891-ac68-4035-a47b-84ffd9bd9188] 2025-05-28 16:27:21.831854 | orchestrator | 16:27:21.831 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=a6303f66-8cc7-4368-a4b6-d2722f461922] 2025-05-28 16:27:30.968274 | orchestrator | 16:27:30.967 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-05-28 16:27:30.988764 | orchestrator | 16:27:30.988 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... 
[30s elapsed] 2025-05-28 16:27:31.496321 | orchestrator | 16:27:31.495 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 30s [id=211c19f8-e850-4b13-818b-759ad7081551] 2025-05-28 16:27:31.594172 | orchestrator | 16:27:31.593 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=3ce2fd26-75eb-4583-9193-eb8c47ff2d05] 2025-05-28 16:27:31.624651 | orchestrator | 16:27:31.624 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-05-28 16:27:31.627552 | orchestrator | 16:27:31.627 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=6689887429524829717] 2025-05-28 16:27:31.627695 | orchestrator | 16:27:31.627 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-05-28 16:27:31.629697 | orchestrator | 16:27:31.629 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-05-28 16:27:31.633224 | orchestrator | 16:27:31.633 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-05-28 16:27:31.633471 | orchestrator | 16:27:31.633 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-05-28 16:27:31.638386 | orchestrator | 16:27:31.637 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-05-28 16:27:31.639043 | orchestrator | 16:27:31.638 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-05-28 16:27:31.649949 | orchestrator | 16:27:31.649 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-05-28 16:27:31.651162 | orchestrator | 16:27:31.650 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-05-28 16:27:31.669481 | orchestrator | 16:27:31.669 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-05-28 16:27:31.682459 | orchestrator | 16:27:31.682 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 
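
null_resource.node_semaphore acts as a synchronisation point: it is only created once the last node server exists, and all nine volume attachments hang off it. The attachment ids just below have the form <server id>/<volume id> and show three volumes each on node_server[3], [4] and [5]. A sketch consistent with that ordering; the index arithmetic is an assumption derived from this run:

    resource "null_resource" "node_semaphore" {
      depends_on = [openstack_compute_instance_v2.node_server]
    }

    resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      count = 9
      # in this run attachments [0,3,6] land on node_server[3],
      # [1,4,7] on node_server[4] and [2,5,8] on node_server[5]
      instance_id = openstack_compute_instance_v2.node_server[3 + count.index % 3].id
      volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
      depends_on  = [null_resource.node_semaphore]
    }
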
2025-05-28 16:27:36.960313 | orchestrator | 16:27:36.959 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=3ce2fd26-75eb-4583-9193-eb8c47ff2d05/705788e5-cc1d-4d40-94fd-fb0e2f22a483] 2025-05-28 16:27:36.969961 | orchestrator | 16:27:36.969 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=71f8fd10-6320-4e6b-9c05-1de829acbd34/80beb2a7-6ee1-4917-8c3d-de783739f119] 2025-05-28 16:27:36.984640 | orchestrator | 16:27:36.984 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=3ce2fd26-75eb-4583-9193-eb8c47ff2d05/66780fe2-f30a-4cd5-a925-045679329f08] 2025-05-28 16:27:36.997156 | orchestrator | 16:27:36.996 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=a6303f66-8cc7-4368-a4b6-d2722f461922/c3ba669b-02ce-4ac9-8d34-f5b1bbc1f6b4] 2025-05-28 16:27:37.017726 | orchestrator | 16:27:37.017 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=a6303f66-8cc7-4368-a4b6-d2722f461922/d5a98c17-e489-4dc0-a000-f021a8d49d4d] 2025-05-28 16:27:37.031163 | orchestrator | 16:27:37.030 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=71f8fd10-6320-4e6b-9c05-1de829acbd34/3045bd6c-b8ff-4958-af32-f9dea72800f3] 2025-05-28 16:27:37.052154 | orchestrator | 16:27:37.051 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=3ce2fd26-75eb-4583-9193-eb8c47ff2d05/da6420c4-4562-42e6-8445-8de06d590092] 2025-05-28 16:27:37.064535 | orchestrator | 16:27:37.064 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=71f8fd10-6320-4e6b-9c05-1de829acbd34/1369a208-db5b-4ff3-8df7-c2f8ed8178e8] 2025-05-28 16:27:37.086405 | orchestrator | 16:27:37.085 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=a6303f66-8cc7-4368-a4b6-d2722f461922/0444fcd6-ace4-41be-a60f-d61a86741ad0] 2025-05-28 16:27:41.684083 | orchestrator | 16:27:41.683 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-05-28 16:27:51.684922 | orchestrator | 16:27:51.684 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-05-28 16:27:52.246756 | orchestrator | 16:27:52.246 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=42a0965d-b505-456d-bac8-c378b7c6a2c2] 2025-05-28 16:27:52.271512 | orchestrator | 16:27:52.271 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
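
The Outputs block that follows prints both values blank because they are declared sensitive, so Terraform masks them on the console; the job instead reads the address back in the later "Fetch manager address" task. A sketch of the corresponding output blocks (the referenced resources match the log; the exact expressions are assumptions):

    output "manager_address" {
      value     = openstack_networking_floatingip_v2.manager_floating_ip.address
      sensitive = true
    }

    output "private_key" {
      value     = openstack_compute_keypair_v2.key.private_key
      sensitive = true
    }
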
2025-05-28 16:27:52.271592 | orchestrator | 16:27:52.271 STDOUT terraform: Outputs: 2025-05-28 16:27:52.271629 | orchestrator | 16:27:52.271 STDOUT terraform: manager_address = 2025-05-28 16:27:52.271667 | orchestrator | 16:27:52.271 STDOUT terraform: private_key = 2025-05-28 16:27:52.664773 | orchestrator | ok: Runtime: 0:01:34.243487 2025-05-28 16:27:52.703374 | 2025-05-28 16:27:52.703545 | TASK [Create infrastructure (stable)] 2025-05-28 16:27:53.237557 | orchestrator | skipping: Conditional result was False 2025-05-28 16:27:53.254537 | 2025-05-28 16:27:53.254696 | TASK [Fetch manager address] 2025-05-28 16:27:53.706158 | orchestrator | ok 2025-05-28 16:27:53.713945 | 2025-05-28 16:27:53.714063 | TASK [Set manager_host address] 2025-05-28 16:27:53.795452 | orchestrator | ok 2025-05-28 16:27:53.806148 | 2025-05-28 16:27:53.806278 | LOOP [Update ansible collections] 2025-05-28 16:27:58.556458 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-05-28 16:27:58.556759 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-28 16:27:58.556799 | orchestrator | Starting galaxy collection install process 2025-05-28 16:27:58.556825 | orchestrator | Process install dependency map 2025-05-28 16:27:58.556894 | orchestrator | Starting collection install process 2025-05-28 16:27:58.556917 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons' 2025-05-28 16:27:58.556943 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons 2025-05-28 16:27:58.556968 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-05-28 16:27:58.557021 | orchestrator | ok: Item: commons Runtime: 0:00:04.428040 2025-05-28 16:28:02.103058 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-05-28 16:28:02.103327 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-28 16:28:02.103391 | orchestrator | Starting galaxy collection install process 2025-05-28 16:28:02.103453 | orchestrator | Process install dependency map 2025-05-28 16:28:02.103493 | orchestrator | Starting collection install process 2025-05-28 16:28:02.103527 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services' 2025-05-28 16:28:02.103562 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services 2025-05-28 16:28:02.103595 | orchestrator | osism.services:999.0.0 was installed successfully 2025-05-28 16:28:02.103650 | orchestrator | ok: Item: services Runtime: 0:00:03.274952 2025-05-28 16:28:02.121710 | 2025-05-28 16:28:02.121887 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-28 16:28:12.680662 | orchestrator | ok 2025-05-28 16:28:12.692529 | 2025-05-28 16:28:12.692779 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-28 16:29:12.739445 | orchestrator | ok 2025-05-28 16:29:12.754162 | 2025-05-28 16:29:12.754320 | TASK [Fetch manager ssh hostkey] 2025-05-28 16:29:14.342339 | orchestrator | Output suppressed because no_log was given 2025-05-28 16:29:14.357722 | 2025-05-28 16:29:14.357934 | TASK [Get ssh keypair from terraform environment] 2025-05-28 16:29:14.895213 | orchestrator 
| ok: Runtime: 0:00:00.011608 2025-05-28 16:29:14.913435 | 2025-05-28 16:29:14.913692 | TASK [Point out that the following task takes some time and does not give any output] 2025-05-28 16:29:14.966531 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-05-28 16:29:14.976051 | 2025-05-28 16:29:14.976178 | TASK [Run manager part 0] 2025-05-28 16:29:16.658591 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-28 16:29:16.779852 | orchestrator | 2025-05-28 16:29:16.779915 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-05-28 16:29:16.779924 | orchestrator | 2025-05-28 16:29:16.779941 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-05-28 16:29:18.491902 | orchestrator | ok: [testbed-manager] 2025-05-28 16:29:18.491993 | orchestrator | 2025-05-28 16:29:18.492044 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-28 16:29:18.492068 | orchestrator | 2025-05-28 16:29:18.492091 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-28 16:29:20.381602 | orchestrator | ok: [testbed-manager] 2025-05-28 16:29:20.381781 | orchestrator | 2025-05-28 16:29:20.381801 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-28 16:29:21.064853 | orchestrator | ok: [testbed-manager] 2025-05-28 16:29:21.064947 | orchestrator | 2025-05-28 16:29:21.064963 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-28 16:29:21.115935 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:29:21.115975 | orchestrator | 2025-05-28 16:29:21.115983 | orchestrator | TASK [Update package cache] **************************************************** 2025-05-28 16:29:21.141947 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:29:21.141986 | orchestrator | 2025-05-28 16:29:21.141993 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-28 16:29:21.163526 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:29:21.163561 | orchestrator | 2025-05-28 16:29:21.163566 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-28 16:29:21.183881 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:29:21.183915 | orchestrator | 2025-05-28 16:29:21.183921 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-28 16:29:21.204379 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:29:21.204409 | orchestrator | 2025-05-28 16:29:21.204416 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-05-28 16:29:21.226521 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:29:21.226553 | orchestrator | 2025-05-28 16:29:21.226560 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-05-28 16:29:21.257879 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:29:21.257925 | orchestrator | 2025-05-28 16:29:21.257936 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-05-28 16:29:22.092274 | orchestrator | changed: 
[testbed-manager] 2025-05-28 16:29:22.092357 | orchestrator | 2025-05-28 16:29:22.092365 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-05-28 16:32:22.925691 | orchestrator | changed: [testbed-manager] 2025-05-28 16:32:22.927245 | orchestrator | 2025-05-28 16:32:22.927274 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-05-28 16:33:39.741469 | orchestrator | changed: [testbed-manager] 2025-05-28 16:33:39.741536 | orchestrator | 2025-05-28 16:33:39.741548 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-28 16:34:04.613601 | orchestrator | changed: [testbed-manager] 2025-05-28 16:34:04.613649 | orchestrator | 2025-05-28 16:34:04.613660 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-28 16:34:13.417208 | orchestrator | changed: [testbed-manager] 2025-05-28 16:34:13.417333 | orchestrator | 2025-05-28 16:34:13.417357 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-28 16:34:13.472887 | orchestrator | ok: [testbed-manager] 2025-05-28 16:34:13.472965 | orchestrator | 2025-05-28 16:34:13.472973 | orchestrator | TASK [Get current user] ******************************************************** 2025-05-28 16:34:14.297697 | orchestrator | ok: [testbed-manager] 2025-05-28 16:34:14.297790 | orchestrator | 2025-05-28 16:34:14.297808 | orchestrator | TASK [Create venv directory] *************************************************** 2025-05-28 16:34:15.042307 | orchestrator | changed: [testbed-manager] 2025-05-28 16:34:15.042579 | orchestrator | 2025-05-28 16:34:15.042605 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-05-28 16:34:21.545597 | orchestrator | changed: [testbed-manager] 2025-05-28 16:34:21.545694 | orchestrator | 2025-05-28 16:34:21.545743 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-05-28 16:34:27.689789 | orchestrator | changed: [testbed-manager] 2025-05-28 16:34:27.689838 | orchestrator | 2025-05-28 16:34:27.689849 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-05-28 16:34:30.405863 | orchestrator | changed: [testbed-manager] 2025-05-28 16:34:30.405905 | orchestrator | 2025-05-28 16:34:30.405914 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-05-28 16:34:32.186145 | orchestrator | changed: [testbed-manager] 2025-05-28 16:34:32.186184 | orchestrator | 2025-05-28 16:34:32.186192 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-05-28 16:34:33.272623 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-28 16:34:33.272705 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-28 16:34:33.272718 | orchestrator | 2025-05-28 16:34:33.272729 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-05-28 16:34:33.312666 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-28 16:34:33.312716 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-28 16:34:33.312722 | orchestrator | 2.19. 
Deprecation warnings can be disabled by setting 2025-05-28 16:34:33.312727 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-05-28 16:34:39.715815 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-28 16:34:39.715905 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-28 16:34:39.715919 | orchestrator | 2025-05-28 16:34:39.715932 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-05-28 16:34:40.283012 | orchestrator | changed: [testbed-manager] 2025-05-28 16:34:40.283102 | orchestrator | 2025-05-28 16:34:40.283119 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-05-28 16:36:00.334702 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-05-28 16:36:00.334971 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-05-28 16:36:00.334992 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-05-28 16:36:00.335003 | orchestrator | 2025-05-28 16:36:00.335013 | orchestrator | TASK [Install local collections] *********************************************** 2025-05-28 16:36:02.630634 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-05-28 16:36:02.630676 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-05-28 16:36:02.630682 | orchestrator | 2025-05-28 16:36:02.630687 | orchestrator | PLAY [Create operator user] **************************************************** 2025-05-28 16:36:02.630693 | orchestrator | 2025-05-28 16:36:02.630697 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-28 16:36:04.089883 | orchestrator | ok: [testbed-manager] 2025-05-28 16:36:04.089920 | orchestrator | 2025-05-28 16:36:04.089927 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-28 16:36:04.140180 | orchestrator | ok: [testbed-manager] 2025-05-28 16:36:04.140226 | orchestrator | 2025-05-28 16:36:04.140236 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-28 16:36:04.229770 | orchestrator | ok: [testbed-manager] 2025-05-28 16:36:04.229851 | orchestrator | 2025-05-28 16:36:04.229861 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-28 16:36:05.042625 | orchestrator | changed: [testbed-manager] 2025-05-28 16:36:05.042671 | orchestrator | 2025-05-28 16:36:05.042680 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-28 16:36:05.769434 | orchestrator | changed: [testbed-manager] 2025-05-28 16:36:05.769478 | orchestrator | 2025-05-28 16:36:05.769487 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-28 16:36:07.173102 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-05-28 16:36:07.173170 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-05-28 16:36:07.173184 | orchestrator | 2025-05-28 16:36:07.173210 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-05-28 16:36:09.102232 | orchestrator | changed: [testbed-manager] 2025-05-28 16:36:09.102352 | orchestrator | 2025-05-28 16:36:09.102369 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc 
configuration file] *** 2025-05-28 16:36:10.860779 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-05-28 16:36:10.860946 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-05-28 16:36:10.860963 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-05-28 16:36:10.860975 | orchestrator | 2025-05-28 16:36:10.860987 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-28 16:36:11.403808 | orchestrator | changed: [testbed-manager] 2025-05-28 16:36:11.403877 | orchestrator | 2025-05-28 16:36:11.403885 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-28 16:36:11.474216 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:36:11.474259 | orchestrator | 2025-05-28 16:36:11.474268 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-05-28 16:36:12.325227 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-28 16:36:12.325309 | orchestrator | changed: [testbed-manager] 2025-05-28 16:36:12.325324 | orchestrator | 2025-05-28 16:36:12.325336 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-28 16:36:12.360811 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:36:12.360917 | orchestrator | 2025-05-28 16:36:12.360934 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-28 16:36:12.393346 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:36:12.393478 | orchestrator | 2025-05-28 16:36:12.393497 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-05-28 16:36:12.430463 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:36:12.430538 | orchestrator | 2025-05-28 16:36:12.430553 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-28 16:36:12.482301 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:36:12.482376 | orchestrator | 2025-05-28 16:36:12.482390 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-28 16:36:13.216993 | orchestrator | ok: [testbed-manager] 2025-05-28 16:36:13.217032 | orchestrator | 2025-05-28 16:36:13.217039 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-28 16:36:13.217044 | orchestrator | 2025-05-28 16:36:13.217050 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-28 16:36:14.638762 | orchestrator | ok: [testbed-manager] 2025-05-28 16:36:14.638799 | orchestrator | 2025-05-28 16:36:14.638805 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-05-28 16:36:15.617102 | orchestrator | changed: [testbed-manager] 2025-05-28 16:36:15.617199 | orchestrator | 2025-05-28 16:36:15.617214 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 16:36:15.617228 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-05-28 16:36:15.617239 | orchestrator | 2025-05-28 16:36:15.796166 | orchestrator | ok: Runtime: 0:07:00.460516 2025-05-28 16:36:15.806591 | 2025-05-28 16:36:15.806701 | TASK [Point out that the log in on the manager is now possible] 2025-05-28 16:36:15.852402 | 
orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2025-05-28 16:36:15.861749 | 2025-05-28 16:36:15.861952 | TASK [Point out that the following task takes some time and does not give any output] 2025-05-28 16:36:15.901190 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-05-28 16:36:15.911708 | 2025-05-28 16:36:15.911917 | TASK [Run manager part 1 + 2] 2025-05-28 16:36:16.768018 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-28 16:36:16.821777 | orchestrator | 2025-05-28 16:36:16.821906 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-05-28 16:36:16.821926 | orchestrator | 2025-05-28 16:36:16.821956 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-28 16:36:19.838616 | orchestrator | ok: [testbed-manager] 2025-05-28 16:36:19.838715 | orchestrator | 2025-05-28 16:36:19.838770 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-28 16:36:19.876197 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:36:19.876270 | orchestrator | 2025-05-28 16:36:19.876291 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-28 16:36:19.915775 | orchestrator | ok: [testbed-manager] 2025-05-28 16:36:19.915885 | orchestrator | 2025-05-28 16:36:19.915907 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-28 16:36:19.963555 | orchestrator | ok: [testbed-manager] 2025-05-28 16:36:19.963640 | orchestrator | 2025-05-28 16:36:19.963658 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-28 16:36:20.077228 | orchestrator | ok: [testbed-manager] 2025-05-28 16:36:20.077317 | orchestrator | 2025-05-28 16:36:20.077337 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-28 16:36:20.145999 | orchestrator | ok: [testbed-manager] 2025-05-28 16:36:20.146166 | orchestrator | 2025-05-28 16:36:20.146185 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-28 16:36:20.188674 | orchestrator | included: /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-05-28 16:36:20.188754 | orchestrator | 2025-05-28 16:36:20.188770 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-28 16:36:20.905428 | orchestrator | ok: [testbed-manager] 2025-05-28 16:36:20.905520 | orchestrator | 2025-05-28 16:36:20.905538 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-28 16:36:20.950569 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:36:20.950652 | orchestrator | 2025-05-28 16:36:20.950669 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-28 16:36:22.328092 | orchestrator | changed: [testbed-manager] 2025-05-28 16:36:22.328196 | orchestrator | 2025-05-28 16:36:22.328216 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-28 16:36:22.910108 | orchestrator | ok: [testbed-manager]
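
The 'Run manager part 0' play above boils down to preparing an isolated Ansible runtime on the manager. A minimal shell sketch of the same steps, assuming the paths visible in the log (/opt/venv, /opt/src, /usr/share/ansible/collections); the role logic itself may differ:

# Sketch of the manager part 0 bootstrap; paths and version specs taken from the log above.
python3 -m venv /opt/venv
/opt/venv/bin/pip install netaddr ansible-core 'requests>=2.32.2' 'docker>=7.1.0'

# Collections from Ansible Galaxy, installed into the shared collection path.
/opt/venv/bin/ansible-galaxy collection install -p /usr/share/ansible/collections \
    ansible.netcommon ansible.posix 'community.docker>=3.10.2'

# Local collections synced to /opt/src are installed from their source trees.
for c in ansible-collection-commons ansible-collection-services; do
    /opt/venv/bin/ansible-galaxy collection install -p /usr/share/ansible/collections \
        "/opt/src/osism/$c"
done
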
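The osism.commons.operator tasks above create the deployment user. A rough shell equivalent, assuming the user is named dragon (its home directory /home/dragon shows up later in this log) and that sudo is passwordless; the authorized keys come from the vaulted testbed configuration:

# Rough shell equivalent of the operator role; user name and sudoers content are assumptions.
groupadd dragon                                      # "Create operator group"
useradd -m -g dragon -s /bin/bash dragon             # "Create user"
usermod -aG adm,sudo dragon                          # "Add user to additional groups"
echo 'dragon ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/dragon
chmod 0440 /etc/sudoers.d/dragon                     # "Copy user sudoers file"
for v in LANGUAGE LANG LC_ALL; do                    # "Set language variables in .bashrc"
    echo "export $v=C.UTF-8" >> /home/dragon/.bashrc
done
install -d -m 0700 -o dragon -g dragon /home/dragon/.ssh
passwd -l dragon                                     # "Unset & lock password"
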
2025-05-28 16:36:22.910198 | orchestrator | 2025-05-28 16:36:22.910214 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-28 16:36:24.126211 | orchestrator | changed: [testbed-manager] 2025-05-28 16:36:24.126298 | orchestrator | 2025-05-28 16:36:24.126317 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-28 16:36:37.179951 | orchestrator | changed: [testbed-manager] 2025-05-28 16:36:37.180956 | orchestrator | 2025-05-28 16:36:37.180993 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-28 16:36:37.851518 | orchestrator | ok: [testbed-manager] 2025-05-28 16:36:37.851572 | orchestrator | 2025-05-28 16:36:37.851582 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-28 16:36:37.910683 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:36:37.910732 | orchestrator | 2025-05-28 16:36:37.910739 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-05-28 16:36:38.902491 | orchestrator | changed: [testbed-manager] 2025-05-28 16:36:38.902565 | orchestrator | 2025-05-28 16:36:38.902580 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-05-28 16:36:39.867219 | orchestrator | changed: [testbed-manager] 2025-05-28 16:36:39.867276 | orchestrator | 2025-05-28 16:36:39.867286 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-05-28 16:36:40.457504 | orchestrator | changed: [testbed-manager] 2025-05-28 16:36:40.457579 | orchestrator | 2025-05-28 16:36:40.457594 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-05-28 16:36:40.500868 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-28 16:36:40.500986 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-28 16:36:40.501003 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-28 16:36:40.501015 | orchestrator | deprecation_warnings=False in ansible.cfg. 
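
The osism.commons.repository tasks above swap the classic /etc/apt/sources.list for a deb822-style ubuntu.sources file before refreshing the package cache. An illustrative version of those steps; the URIs and suites are placeholders, not the content of the role's actual template:

# Illustrative deb822 sources file for Ubuntu 24.04 (noble); the mirror is a placeholder.
cat > /etc/apt/sources.list.d/ubuntu.sources <<'EOF'
Types: deb
URIs: https://archive.ubuntu.com/ubuntu
Suites: noble noble-updates noble-security
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
EOF
rm -f /etc/apt/sources.list     # "Remove sources.list file"
apt-get update                  # "Update package cache"
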
2025-05-28 16:36:45.986470 | orchestrator | changed: [testbed-manager] 2025-05-28 16:36:45.986530 | orchestrator | 2025-05-28 16:36:45.986539 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-05-28 16:36:55.057102 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-05-28 16:36:55.057163 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-05-28 16:36:55.057174 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-05-28 16:36:55.057182 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-05-28 16:36:55.057193 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-05-28 16:36:55.057200 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-05-28 16:36:55.057205 | orchestrator | 2025-05-28 16:36:55.057213 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-05-28 16:36:56.103301 | orchestrator | changed: [testbed-manager] 2025-05-28 16:36:56.103397 | orchestrator | 2025-05-28 16:36:56.103412 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-05-28 16:36:56.144683 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:36:56.144746 | orchestrator | 2025-05-28 16:36:56.144759 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-05-28 16:36:59.225754 | orchestrator | changed: [testbed-manager] 2025-05-28 16:36:59.225875 | orchestrator | 2025-05-28 16:36:59.225891 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-05-28 16:36:59.273979 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:36:59.274061 | orchestrator | 2025-05-28 16:36:59.274070 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-05-28 16:38:33.377262 | orchestrator | changed: [testbed-manager] 2025-05-28 16:38:33.377471 | orchestrator | 2025-05-28 16:38:33.377513 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-28 16:38:34.496284 | orchestrator | ok: [testbed-manager] 2025-05-28 16:38:34.496368 | orchestrator | 2025-05-28 16:38:34.496383 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 16:38:34.496396 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-05-28 16:38:34.496407 | orchestrator | 2025-05-28 16:38:35.070788 | orchestrator | ok: Runtime: 0:02:18.390500 2025-05-28 16:38:35.089033 | 2025-05-28 16:38:35.089216 | TASK [Reboot manager] 2025-05-28 16:38:36.628080 | orchestrator | ok: Runtime: 0:00:00.933874 2025-05-28 16:38:36.646671 | 2025-05-28 16:38:36.647083 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-28 16:38:51.055457 | orchestrator | ok 2025-05-28 16:38:51.067304 | 2025-05-28 16:38:51.067458 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-28 16:39:51.115457 | orchestrator | ok 2025-05-28 16:39:51.126203 | 2025-05-28 16:39:51.126342 | TASK [Deploy manager + bootstrap nodes] 2025-05-28 16:39:53.653446 | orchestrator | 2025-05-28 16:39:53.653662 | orchestrator | # DEPLOY MANAGER 2025-05-28 16:39:53.653687 | orchestrator | 2025-05-28 16:39:53.653701 | orchestrator | + set -e 2025-05-28 16:39:53.653715 | orchestrator | + echo 2025-05-28 16:39:53.653730 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-05-28 16:39:53.653746 | orchestrator | + echo 2025-05-28 16:39:53.653800 | orchestrator | + cat /opt/manager-vars.sh 2025-05-28 16:39:53.656762 | orchestrator | export NUMBER_OF_NODES=6 2025-05-28 16:39:53.656799 | orchestrator | 2025-05-28 16:39:53.656813 | orchestrator | export CEPH_VERSION=reef 2025-05-28 16:39:53.656827 | orchestrator | export CONFIGURATION_VERSION=main 2025-05-28 16:39:53.656839 | orchestrator | export MANAGER_VERSION=latest 2025-05-28 16:39:53.656861 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-05-28 16:39:53.656872 | orchestrator | 2025-05-28 16:39:53.656891 | orchestrator | export ARA=false 2025-05-28 16:39:53.656902 | orchestrator | export TEMPEST=false 2025-05-28 16:39:53.656919 | orchestrator | export IS_ZUUL=true 2025-05-28 16:39:53.656930 | orchestrator | 2025-05-28 16:39:53.656948 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180 2025-05-28 16:39:53.656960 | orchestrator | export EXTERNAL_API=false 2025-05-28 16:39:53.656971 | orchestrator | 2025-05-28 16:39:53.656992 | orchestrator | export IMAGE_USER=ubuntu 2025-05-28 16:39:53.657002 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-05-28 16:39:53.657013 | orchestrator | 2025-05-28 16:39:53.657028 | orchestrator | export CEPH_STACK=ceph-ansible 2025-05-28 16:39:53.657046 | orchestrator | 2025-05-28 16:39:53.657057 | orchestrator | + echo 2025-05-28 16:39:53.657068 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-28 16:39:53.657826 | orchestrator | ++ export INTERACTIVE=false 2025-05-28 16:39:53.657843 | orchestrator | ++ INTERACTIVE=false 2025-05-28 16:39:53.657855 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-28 16:39:53.657867 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-28 16:39:53.658049 | orchestrator | + source /opt/manager-vars.sh 2025-05-28 16:39:53.658069 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-28 16:39:53.658082 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-28 16:39:53.658093 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-28 16:39:53.658106 | orchestrator | ++ CEPH_VERSION=reef 2025-05-28 16:39:53.658117 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-28 16:39:53.658128 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-28 16:39:53.658139 | orchestrator | ++ export MANAGER_VERSION=latest 2025-05-28 16:39:53.658150 | orchestrator | ++ MANAGER_VERSION=latest 2025-05-28 16:39:53.658161 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-28 16:39:53.658171 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-28 16:39:53.658187 | orchestrator | ++ export ARA=false 2025-05-28 16:39:53.658206 | orchestrator | ++ ARA=false 2025-05-28 16:39:53.658218 | orchestrator | ++ export TEMPEST=false 2025-05-28 16:39:53.658229 | orchestrator | ++ TEMPEST=false 2025-05-28 16:39:53.658239 | orchestrator | ++ export IS_ZUUL=true 2025-05-28 16:39:53.658250 | orchestrator | ++ IS_ZUUL=true 2025-05-28 16:39:53.658265 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180 2025-05-28 16:39:53.658276 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180 2025-05-28 16:39:53.658287 | orchestrator | ++ export EXTERNAL_API=false 2025-05-28 16:39:53.658297 | orchestrator | ++ EXTERNAL_API=false 2025-05-28 16:39:53.658331 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-28 16:39:53.658342 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-28 16:39:53.659523 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-28 16:39:53.659540 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-28 
16:39:53.659551 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-28 16:39:53.659562 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-28 16:39:53.659574 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-05-28 16:39:53.709629 | orchestrator | + docker version 2025-05-28 16:39:53.966292 | orchestrator | Client: Docker Engine - Community 2025-05-28 16:39:53.966447 | orchestrator | Version: 27.5.1 2025-05-28 16:39:53.966467 | orchestrator | API version: 1.47 2025-05-28 16:39:53.966479 | orchestrator | Go version: go1.22.11 2025-05-28 16:39:53.966490 | orchestrator | Git commit: 9f9e405 2025-05-28 16:39:53.966505 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-05-28 16:39:53.966517 | orchestrator | OS/Arch: linux/amd64 2025-05-28 16:39:53.966528 | orchestrator | Context: default 2025-05-28 16:39:53.966539 | orchestrator | 2025-05-28 16:39:53.966550 | orchestrator | Server: Docker Engine - Community 2025-05-28 16:39:53.966561 | orchestrator | Engine: 2025-05-28 16:39:53.966572 | orchestrator | Version: 27.5.1 2025-05-28 16:39:53.966584 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-05-28 16:39:53.966595 | orchestrator | Go version: go1.22.11 2025-05-28 16:39:53.966606 | orchestrator | Git commit: 4c9b3b0 2025-05-28 16:39:53.966652 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-05-28 16:39:53.966663 | orchestrator | OS/Arch: linux/amd64 2025-05-28 16:39:53.966674 | orchestrator | Experimental: false 2025-05-28 16:39:53.966685 | orchestrator | containerd: 2025-05-28 16:39:53.966695 | orchestrator | Version: 1.7.27 2025-05-28 16:39:53.966706 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-05-28 16:39:53.966717 | orchestrator | runc: 2025-05-28 16:39:53.966728 | orchestrator | Version: 1.2.5 2025-05-28 16:39:53.966739 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-05-28 16:39:53.966749 | orchestrator | docker-init: 2025-05-28 16:39:53.966760 | orchestrator | Version: 0.19.0 2025-05-28 16:39:53.966771 | orchestrator | GitCommit: de40ad0 2025-05-28 16:39:53.970081 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-05-28 16:39:53.978972 | orchestrator | + set -e 2025-05-28 16:39:53.979040 | orchestrator | + source /opt/manager-vars.sh 2025-05-28 16:39:53.979062 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-28 16:39:53.979081 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-28 16:39:53.979100 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-28 16:39:53.979120 | orchestrator | ++ CEPH_VERSION=reef 2025-05-28 16:39:53.979142 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-28 16:39:53.979161 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-28 16:39:53.979182 | orchestrator | ++ export MANAGER_VERSION=latest 2025-05-28 16:39:53.979201 | orchestrator | ++ MANAGER_VERSION=latest 2025-05-28 16:39:53.979221 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-28 16:39:53.979240 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-28 16:39:53.979260 | orchestrator | ++ export ARA=false 2025-05-28 16:39:53.979281 | orchestrator | ++ ARA=false 2025-05-28 16:39:53.979300 | orchestrator | ++ export TEMPEST=false 2025-05-28 16:39:53.979364 | orchestrator | ++ TEMPEST=false 2025-05-28 16:39:53.979384 | orchestrator | ++ export IS_ZUUL=true 2025-05-28 16:39:53.979402 | orchestrator | ++ IS_ZUUL=true 2025-05-28 16:39:53.979422 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180 2025-05-28 16:39:53.979441 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180 2025-05-28 16:39:53.979460 | orchestrator | ++ export EXTERNAL_API=false 2025-05-28 16:39:53.979479 | orchestrator | ++ EXTERNAL_API=false 2025-05-28 16:39:53.979512 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-28 16:39:53.979531 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-28 16:39:53.979551 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-28 16:39:53.979571 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-28 16:39:53.979590 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-28 16:39:53.979609 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-28 16:39:53.979629 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-28 16:39:53.979655 | orchestrator | ++ export INTERACTIVE=false 2025-05-28 16:39:53.979676 | orchestrator | ++ INTERACTIVE=false 2025-05-28 16:39:53.979695 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-28 16:39:53.979715 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-28 16:39:53.979734 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-28 16:39:53.979754 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-05-28 16:39:53.979773 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-05-28 16:39:53.987202 | orchestrator | + set -e 2025-05-28 16:39:53.987283 | orchestrator | + VERSION=reef 2025-05-28 16:39:53.988507 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-05-28 16:39:53.994146 | orchestrator | + [[ -n ceph_version: reef ]] 2025-05-28 16:39:53.994203 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-05-28 16:39:54.000051 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-05-28 16:39:54.006195 | orchestrator | + set -e 2025-05-28 16:39:54.006766 | orchestrator | + VERSION=2024.2 2025-05-28 16:39:54.007352 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-05-28 16:39:54.011639 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-05-28 16:39:54.011680 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-05-28 16:39:54.016635 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-05-28 16:39:54.017692 | orchestrator | ++ semver latest 7.0.0 2025-05-28 16:39:54.077816 | orchestrator | + [[ -1 -ge 0 ]] 2025-05-28 16:39:54.077900 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-05-28 16:39:54.077911 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-05-28 16:39:54.077917 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-05-28 16:39:54.117684 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-28 16:39:54.119595 | orchestrator | + source /opt/venv/bin/activate 2025-05-28 16:39:54.120404 | orchestrator | ++ deactivate nondestructive 2025-05-28 16:39:54.120426 | orchestrator | ++ '[' -n '' ']' 2025-05-28 16:39:54.120509 | orchestrator | ++ '[' -n '' ']' 2025-05-28 16:39:54.120524 | orchestrator | ++ hash -r 2025-05-28 16:39:54.120680 | orchestrator | ++ '[' -n '' ']' 2025-05-28 16:39:54.120695 | orchestrator | ++ unset VIRTUAL_ENV 2025-05-28 16:39:54.120706 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-05-28 16:39:54.120721 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-05-28 16:39:54.120889 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-05-28 16:39:54.120903 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-05-28 16:39:54.120913 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-05-28 16:39:54.120923 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-05-28 16:39:54.120940 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-28 16:39:54.120955 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-28 16:39:54.120965 | orchestrator | ++ export PATH 2025-05-28 16:39:54.121090 | orchestrator | ++ '[' -n '' ']' 2025-05-28 16:39:54.121134 | orchestrator | ++ '[' -z '' ']' 2025-05-28 16:39:54.121147 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-05-28 16:39:54.121160 | orchestrator | ++ PS1='(venv) ' 2025-05-28 16:39:54.121170 | orchestrator | ++ export PS1 2025-05-28 16:39:54.121223 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-05-28 16:39:54.121235 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-05-28 16:39:54.121444 | orchestrator | ++ hash -r 2025-05-28 16:39:54.121633 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-05-28 16:39:55.439796 | orchestrator | 2025-05-28 16:39:55.439909 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-05-28 16:39:55.439924 | orchestrator | 2025-05-28 16:39:55.439967 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-28 16:39:56.041664 | orchestrator | ok: [testbed-manager] 2025-05-28 16:39:56.041777 | orchestrator | 2025-05-28 16:39:56.041792 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-05-28 16:39:57.020304 | orchestrator | changed: [testbed-manager] 2025-05-28 16:39:57.020453 | orchestrator | 2025-05-28 16:39:57.020467 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-05-28 16:39:57.020480 | orchestrator | 2025-05-28 16:39:57.020491 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-28 16:39:59.443815 | orchestrator | ok: [testbed-manager] 2025-05-28 16:39:59.443940 | orchestrator | 2025-05-28 16:39:59.443956 | orchestrator | TASK [Pull images] ************************************************************* 2025-05-28 16:40:04.463575 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2) 2025-05-28 16:40:04.463717 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/mariadb:11.7.2) 2025-05-28 16:40:04.463734 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:reef) 2025-05-28 16:40:04.463749 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:latest) 2025-05-28 16:40:04.463760 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:2024.2) 2025-05-28 16:40:04.463772 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/redis:7.4.3-alpine) 2025-05-28 16:40:04.463783 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.2.2) 2025-05-28 
16:40:04.463794 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:latest) 2025-05-28 16:40:04.463805 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:latest) 2025-05-28 16:40:04.463816 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/postgres:16.9-alpine) 2025-05-28 16:40:04.463826 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/traefik:v3.4.0) 2025-05-28 16:40:04.463837 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/hashicorp/vault:1.19.3) 2025-05-28 16:40:04.463882 | orchestrator | 2025-05-28 16:40:04.463895 | orchestrator | TASK [Check status] ************************************************************ 2025-05-28 16:41:20.456726 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-28 16:41:20.456861 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 2025-05-28 16:41:20.456878 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left). 2025-05-28 16:41:20.456889 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left). 2025-05-28 16:41:20.456916 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j513782543571.1539', 'results_file': '/home/dragon/.ansible_async/j513782543571.1539', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'}) 2025-05-28 16:41:20.456938 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j743097554743.1566', 'results_file': '/home/dragon/.ansible_async/j743097554743.1566', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/mariadb:11.7.2', 'ansible_loop_var': 'item'}) 2025-05-28 16:41:20.456954 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-28 16:41:20.456966 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j958010662624.1591', 'results_file': '/home/dragon/.ansible_async/j958010662624.1591', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:reef', 'ansible_loop_var': 'item'}) 2025-05-28 16:41:20.456978 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j100919900400.1623', 'results_file': '/home/dragon/.ansible_async/j100919900400.1623', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:latest', 'ansible_loop_var': 'item'}) 2025-05-28 16:41:20.456989 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 
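
The shell trace further up shows what /opt/configuration/scripts/set-ceph-version.sh (and its set-openstack-version.sh twin) does: pin a version in the manager configuration with grep and sed. Reassembled from the trace, with the argument handling simplified:

#!/usr/bin/env bash
# Reconstruction of the version setter traced above (invoked as: set-ceph-version.sh reef).
set -e
VERSION=$1   # e.g. "reef"
CFG=/opt/configuration/environments/manager/configuration.yml
if [[ -n "$(grep '^ceph_version:' "$CFG")" ]]; then
    sed -i "s/ceph_version: .*/ceph_version: $VERSION/g" "$CFG"
fi
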
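The 'Pull images' / 'Check status' pair here is Ansible's async pattern: every docker pull is kicked off in the background, and a second task polls the async job results with retries, so the FAILED - RETRYING lines are expected polling noise rather than errors. A plain-shell analogue of the same idea:

# Shell analogue of the async pull + poll pattern above (image list abridged).
images="registry.osism.tech/osism/osism:latest
        registry.osism.tech/osism/osism-ansible:latest
        registry.osism.tech/dockerhub/library/traefik:v3.4.0"
for img in $images; do
    docker pull "$img" &    # start all pulls concurrently ("Pull images")
done
wait                        # block until every pull has finished ("Check status")
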
2025-05-28 16:41:20.457010 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j648740523553.1655', 'results_file': '/home/dragon/.ansible_async/j648740523553.1655', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:2024.2', 'ansible_loop_var': 'item'}) 2025-05-28 16:41:20.457022 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j283473944775.1687', 'results_file': '/home/dragon/.ansible_async/j283473944775.1687', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/redis:7.4.3-alpine', 'ansible_loop_var': 'item'}) 2025-05-28 16:41:20.457033 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-28 16:41:20.457044 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j442713819179.1726', 'results_file': '/home/dragon/.ansible_async/j442713819179.1726', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.2.2', 'ansible_loop_var': 'item'}) 2025-05-28 16:41:20.457055 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j835844039207.1751', 'results_file': '/home/dragon/.ansible_async/j835844039207.1751', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:latest', 'ansible_loop_var': 'item'}) 2025-05-28 16:41:20.457067 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j755906989858.1783', 'results_file': '/home/dragon/.ansible_async/j755906989858.1783', 'changed': True, 'item': 'registry.osism.tech/osism/osism:latest', 'ansible_loop_var': 'item'}) 2025-05-28 16:41:20.457078 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j356415686883.1818', 'results_file': '/home/dragon/.ansible_async/j356415686883.1818', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/postgres:16.9-alpine', 'ansible_loop_var': 'item'}) 2025-05-28 16:41:20.457089 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j122375758523.1857', 'results_file': '/home/dragon/.ansible_async/j122375758523.1857', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/traefik:v3.4.0', 'ansible_loop_var': 'item'}) 2025-05-28 16:41:20.457125 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j469278745723.1884', 'results_file': '/home/dragon/.ansible_async/j469278745723.1884', 'changed': True, 'item': 'registry.osism.tech/dockerhub/hashicorp/vault:1.19.3', 'ansible_loop_var': 'item'}) 2025-05-28 16:41:20.457137 | orchestrator | 2025-05-28 16:41:20.457149 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-05-28 16:41:20.507102 | orchestrator | ok: [testbed-manager] 2025-05-28 16:41:20.507130 | orchestrator | 2025-05-28 16:41:20.507141 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-05-28 16:41:20.957330 | orchestrator | changed: [testbed-manager] 2025-05-28 16:41:20.957431 | orchestrator | 2025-05-28 16:41:20.957445 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] ******************************* 2025-05-28 16:41:21.321147 | orchestrator | changed: [testbed-manager] 2025-05-28 16:41:21.321242 | orchestrator 
| 2025-05-28 16:41:21.321260 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-05-28 16:41:21.658739 | orchestrator | changed: [testbed-manager] 2025-05-28 16:41:21.658841 | orchestrator | 2025-05-28 16:41:21.658855 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-05-28 16:41:21.708170 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:41:21.708256 | orchestrator | 2025-05-28 16:41:21.708268 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-05-28 16:41:22.008139 | orchestrator | ok: [testbed-manager] 2025-05-28 16:41:22.008270 | orchestrator | 2025-05-28 16:41:22.008286 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-05-28 16:41:22.106103 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:41:22.106205 | orchestrator | 2025-05-28 16:41:22.106217 | orchestrator | PLAY [Apply role traefik & netbox] ********************************************* 2025-05-28 16:41:22.106226 | orchestrator | 2025-05-28 16:41:22.106235 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-28 16:41:23.981306 | orchestrator | ok: [testbed-manager] 2025-05-28 16:41:23.981412 | orchestrator | 2025-05-28 16:41:23.981428 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-05-28 16:41:24.104412 | orchestrator | included: osism.services.traefik for testbed-manager 2025-05-28 16:41:24.104575 | orchestrator | 2025-05-28 16:41:24.104591 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-05-28 16:41:24.161785 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-05-28 16:41:24.161884 | orchestrator | 2025-05-28 16:41:24.161899 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-05-28 16:41:25.255371 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-05-28 16:41:25.255586 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-05-28 16:41:25.255616 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-05-28 16:41:25.255640 | orchestrator | 2025-05-28 16:41:25.255659 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-05-28 16:41:27.057369 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-05-28 16:41:27.057547 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-05-28 16:41:27.057564 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-05-28 16:41:27.057578 | orchestrator | 2025-05-28 16:41:27.057619 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-05-28 16:41:27.659396 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-28 16:41:27.659567 | orchestrator | changed: [testbed-manager] 2025-05-28 16:41:27.659598 | orchestrator | 2025-05-28 16:41:27.659611 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-05-28 16:41:28.312821 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-28 16:41:28.312967 | orchestrator | changed: [testbed-manager] 2025-05-28 16:41:28.313078 | orchestrator | 
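
The traefik config tasks above lay down traefik.yml, traefik.env and certificates.yml next to the certificate material under /opt/traefik. A plausible shape for the dynamic TLS part, assuming the host directory /opt/traefik/certificates is mounted at /certificates inside the container; this is a guess at the template, not its actual content:

# Hypothetical certificates.yml for traefik's file provider; all paths are assumptions.
cat > /opt/traefik/configuration/certificates.yml <<'EOF'
tls:
  certificates:
    - certFile: /certificates/testbed.crt
      keyFile: /certificates/testbed.key
EOF
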
2025-05-28 16:41:28.313101 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-05-28 16:41:28.371564 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:41:28.371682 | orchestrator | 2025-05-28 16:41:28.371695 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-05-28 16:41:28.733093 | orchestrator | ok: [testbed-manager] 2025-05-28 16:41:28.733212 | orchestrator | 2025-05-28 16:41:28.733228 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-05-28 16:41:28.802828 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-05-28 16:41:28.802909 | orchestrator | 2025-05-28 16:41:28.802923 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-05-28 16:41:29.989445 | orchestrator | changed: [testbed-manager] 2025-05-28 16:41:29.989615 | orchestrator | 2025-05-28 16:41:29.989632 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-05-28 16:41:30.781345 | orchestrator | changed: [testbed-manager] 2025-05-28 16:41:30.781460 | orchestrator | 2025-05-28 16:41:30.781476 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-05-28 16:41:33.385844 | orchestrator | changed: [testbed-manager] 2025-05-28 16:41:33.385960 | orchestrator | 2025-05-28 16:41:33.385974 | orchestrator | TASK [Apply netbox role] ******************************************************* 2025-05-28 16:41:33.517295 | orchestrator | included: osism.services.netbox for testbed-manager 2025-05-28 16:41:33.517383 | orchestrator | 2025-05-28 16:41:33.517396 | orchestrator | TASK [osism.services.netbox : Include install tasks] *************************** 2025-05-28 16:41:33.580258 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager 2025-05-28 16:41:33.580296 | orchestrator | 2025-05-28 16:41:33.580309 | orchestrator | TASK [osism.services.netbox : Install required packages] *********************** 2025-05-28 16:41:36.190760 | orchestrator | ok: [testbed-manager] 2025-05-28 16:41:36.190890 | orchestrator | 2025-05-28 16:41:36.190907 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-05-28 16:41:36.300618 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager 2025-05-28 16:41:36.300702 | orchestrator | 2025-05-28 16:41:36.300716 | orchestrator | TASK [osism.services.netbox : Create required directories] ********************* 2025-05-28 16:41:37.447098 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox) 2025-05-28 16:41:37.447213 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration) 2025-05-28 16:41:37.447228 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets) 2025-05-28 16:41:37.447241 | orchestrator | 2025-05-28 16:41:37.447253 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] ******************* 2025-05-28 16:41:37.522211 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager 2025-05-28 16:41:37.522336 | orchestrator | 
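
The 'Create traefik external network', 'Copy docker-compose.yml file' and 'Manage traefik service' tasks above reduce to two docker commands; the compose file location is the one the role just created:

# Roughly what the traefik service tasks above do.
docker network inspect traefik >/dev/null 2>&1 || docker network create traefik   # external network, reused by the netbox stack below
docker compose -f /opt/traefik/docker-compose.yml up -d
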
2025-05-28 16:41:37.522361 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] ***************** 2025-05-28 16:41:38.187030 | orchestrator | changed: [testbed-manager] => (item=postgres) 2025-05-28 16:41:38.187149 | orchestrator | 2025-05-28 16:41:38.187165 | orchestrator | TASK [osism.services.netbox : Copy postgres configuration file] **************** 2025-05-28 16:41:38.859990 | orchestrator | changed: [testbed-manager] 2025-05-28 16:41:38.860097 | orchestrator | 2025-05-28 16:41:38.860112 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-05-28 16:41:39.504635 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-28 16:41:39.504766 | orchestrator | changed: [testbed-manager] 2025-05-28 16:41:39.504783 | orchestrator | 2025-05-28 16:41:39.504795 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] ***** 2025-05-28 16:41:39.928674 | orchestrator | changed: [testbed-manager] 2025-05-28 16:41:39.928750 | orchestrator | 2025-05-28 16:41:39.928765 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] ******************* 2025-05-28 16:41:40.304079 | orchestrator | ok: [testbed-manager] 2025-05-28 16:41:40.304197 | orchestrator | 2025-05-28 16:41:40.304250 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ****************************** 2025-05-28 16:41:40.361068 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:41:40.361203 | orchestrator | 2025-05-28 16:41:40.361218 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] *********** 2025-05-28 16:41:41.014315 | orchestrator | changed: [testbed-manager] 2025-05-28 16:41:41.014370 | orchestrator | 2025-05-28 16:41:41.014383 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-05-28 16:41:41.097184 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager 2025-05-28 16:41:41.097283 | orchestrator | 2025-05-28 16:41:41.097296 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] *********** 2025-05-28 16:41:41.873357 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers) 2025-05-28 16:41:41.873451 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts) 2025-05-28 16:41:41.873457 | orchestrator | 2025-05-28 16:41:41.873487 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] ******************* 2025-05-28 16:41:42.558261 | orchestrator | changed: [testbed-manager] => (item=netbox) 2025-05-28 16:41:42.558361 | orchestrator | 2025-05-28 16:41:42.558375 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] ****************** 2025-05-28 16:41:43.253369 | orchestrator | changed: [testbed-manager] 2025-05-28 16:41:43.253486 | orchestrator | 2025-05-28 16:41:43.253541 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] **** 2025-05-28 16:41:43.302302 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:41:43.302406 | orchestrator | 2025-05-28 16:41:43.302421 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] ***** 2025-05-28 16:41:43.947051 | orchestrator | changed: [testbed-manager] 2025-05-28 16:41:43.947166 | orchestrator | 2025-05-28 16:41:43.947181 | 
orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-05-28 16:41:45.895602 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-28 16:41:45.895739 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-28 16:41:45.895755 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-28 16:41:45.895767 | orchestrator | changed: [testbed-manager] 2025-05-28 16:41:45.895780 | orchestrator | 2025-05-28 16:41:45.895791 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ****************** 2025-05-28 16:41:51.925449 | orchestrator | changed: [testbed-manager] => (item=custom_fields) 2025-05-28 16:41:51.925647 | orchestrator | changed: [testbed-manager] => (item=device_roles) 2025-05-28 16:41:51.925667 | orchestrator | changed: [testbed-manager] => (item=device_types) 2025-05-28 16:41:51.925679 | orchestrator | changed: [testbed-manager] => (item=groups) 2025-05-28 16:41:51.925690 | orchestrator | changed: [testbed-manager] => (item=manufacturers) 2025-05-28 16:41:51.925701 | orchestrator | changed: [testbed-manager] => (item=object_permissions) 2025-05-28 16:41:51.925713 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles) 2025-05-28 16:41:51.925724 | orchestrator | changed: [testbed-manager] => (item=sites) 2025-05-28 16:41:51.925734 | orchestrator | changed: [testbed-manager] => (item=tags) 2025-05-28 16:41:51.925745 | orchestrator | changed: [testbed-manager] => (item=users) 2025-05-28 16:41:51.925756 | orchestrator | 2025-05-28 16:41:51.925769 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] *************** 2025-05-28 16:41:52.581278 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py) 2025-05-28 16:41:52.581397 | orchestrator | 2025-05-28 16:41:52.581413 | orchestrator | TASK [osism.services.netbox : Include service tasks] *************************** 2025-05-28 16:41:52.663472 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager 2025-05-28 16:41:52.663610 | orchestrator | 2025-05-28 16:41:52.663625 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] ******************* 2025-05-28 16:41:53.373248 | orchestrator | changed: [testbed-manager] 2025-05-28 16:41:53.373371 | orchestrator | 2025-05-28 16:41:53.373387 | orchestrator | TASK [osism.services.netbox : Create traefik external network] ***************** 2025-05-28 16:41:53.987299 | orchestrator | ok: [testbed-manager] 2025-05-28 16:41:53.987419 | orchestrator | 2025-05-28 16:41:53.987436 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ******************** 2025-05-28 16:41:54.754463 | orchestrator | changed: [testbed-manager] 2025-05-28 16:41:54.754673 | orchestrator | 2025-05-28 16:41:54.754695 | orchestrator | TASK [osism.services.netbox : Pull container images] *************************** 2025-05-28 16:41:57.138581 | orchestrator | ok: [testbed-manager] 2025-05-28 16:41:57.138720 | orchestrator | 2025-05-28 16:41:57.138737 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] *** 2025-05-28 16:41:58.144723 | orchestrator | ok: [testbed-manager] 2025-05-28 16:41:58.144873 | orchestrator | 2025-05-28 16:41:58.144900 | orchestrator | TASK [osism.services.netbox : Manage netbox service] 
*************************** 2025-05-28 16:42:20.261630 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left). 2025-05-28 16:42:20.261756 | orchestrator | ok: [testbed-manager] 2025-05-28 16:42:20.261772 | orchestrator | 2025-05-28 16:42:20.261785 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ******** 2025-05-28 16:42:20.300454 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:42:20.300507 | orchestrator | 2025-05-28 16:42:20.300523 | orchestrator | TASK [osism.services.netbox : Flush handlers] ********************************** 2025-05-28 16:42:20.300536 | orchestrator | 2025-05-28 16:42:20.300549 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-05-28 16:42:20.328345 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:42:20.328412 | orchestrator | 2025-05-28 16:42:20.328430 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-05-28 16:42:20.385812 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager 2025-05-28 16:42:20.385849 | orchestrator | 2025-05-28 16:42:20.385866 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ****** 2025-05-28 16:42:21.161789 | orchestrator | ok: [testbed-manager] 2025-05-28 16:42:21.161894 | orchestrator | 2025-05-28 16:42:21.161909 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] *** 2025-05-28 16:42:21.232851 | orchestrator | ok: [testbed-manager] 2025-05-28 16:42:21.232905 | orchestrator | 2025-05-28 16:42:21.232919 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] *** 2025-05-28 16:42:21.294280 | orchestrator | ok: [testbed-manager] => { 2025-05-28 16:42:21.294358 | orchestrator | "msg": "The major version of the running postgres container is 16" 2025-05-28 16:42:21.294372 | orchestrator | } 2025-05-28 16:42:21.294384 | orchestrator | 2025-05-28 16:42:21.294396 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ****************** 2025-05-28 16:42:21.864991 | orchestrator | ok: [testbed-manager] 2025-05-28 16:42:21.865102 | orchestrator | 2025-05-28 16:42:21.865120 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] ********** 2025-05-28 16:42:22.636718 | orchestrator | ok: [testbed-manager] 2025-05-28 16:42:22.636832 | orchestrator | 2025-05-28 16:42:22.636848 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ****** 2025-05-28 16:42:22.703326 | orchestrator | ok: [testbed-manager] 2025-05-28 16:42:22.703358 | orchestrator | 2025-05-28 16:42:22.703372 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] *** 2025-05-28 16:42:22.756603 | orchestrator | ok: [testbed-manager] => { 2025-05-28 16:42:22.756642 | orchestrator | "msg": "The major version of the postgres image is 16" 2025-05-28 16:42:22.756654 | orchestrator | } 2025-05-28 16:42:22.756666 | orchestrator | 2025-05-28 16:42:22.756678 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ****************** 2025-05-28 16:42:22.807614 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:42:22.807681 | orchestrator | 2025-05-28 16:42:22.807695 | orchestrator | RUNNING HANDLER [osism.services.netbox : 
Wait for netbox service to stop] ****** 2025-05-28 16:42:22.862801 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:42:22.862827 | orchestrator | 2025-05-28 16:42:22.862839 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] ********* 2025-05-28 16:42:22.906146 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:42:22.906183 | orchestrator | 2025-05-28 16:42:22.906195 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************ 2025-05-28 16:42:22.959099 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:42:22.959159 | orchestrator | 2025-05-28 16:42:22.959172 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] *** 2025-05-28 16:42:23.101744 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:42:23.101834 | orchestrator | 2025-05-28 16:42:23.101849 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] ***************** 2025-05-28 16:42:23.166866 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:42:23.166950 | orchestrator | 2025-05-28 16:42:23.166966 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-05-28 16:42:24.371352 | orchestrator | changed: [testbed-manager] 2025-05-28 16:42:24.371474 | orchestrator | 2025-05-28 16:42:24.371489 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] *** 2025-05-28 16:42:24.423639 | orchestrator | ok: [testbed-manager] 2025-05-28 16:42:24.423758 | orchestrator | 2025-05-28 16:42:24.423775 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] ***** 2025-05-28 16:43:24.469197 | orchestrator | Pausing for 60 seconds 2025-05-28 16:43:24.469336 | orchestrator | changed: [testbed-manager] 2025-05-28 16:43:24.469350 | orchestrator | 2025-05-28 16:43:24.469361 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] *** 2025-05-28 16:43:24.519536 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager 2025-05-28 16:43:24.519685 | orchestrator | 2025-05-28 16:43:24.519700 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] *** 2025-05-28 16:46:54.034370 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left). 2025-05-28 16:46:54.034510 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left). 2025-05-28 16:46:54.034525 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left). 2025-05-28 16:46:54.034536 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left). 2025-05-28 16:46:54.034548 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left). 2025-05-28 16:46:54.034559 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left). 2025-05-28 16:46:54.034569 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left). 
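
The 'Copy netbox systemd unit file' task further up wraps the compose stack in a systemd service, which 'Manage netbox service' then enables and starts. A typical unit of this kind; the contents are assumed, not taken from the role:

# Hypothetical systemd unit wrapping the netbox compose stack.
cat > /etc/systemd/system/netbox.service <<'EOF'
[Unit]
Description=netbox
Requires=docker.service
After=docker.service

[Service]
WorkingDirectory=/opt/netbox
ExecStart=/usr/bin/docker compose up
ExecStop=/usr/bin/docker compose down
Restart=always

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now netbox.service
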
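The handler chain above, from 'Get infos on postgres container' to the skipped 'Upgrade postgres database', guards against postgres major-version jumps: the major version of the running container is compared with that of the freshly pulled image, and the pgautoupgrade path only runs when they differ (here both are 16, so every upgrade handler is skipped). A sketch of that comparison, with the container name and tag parsing assumed:

# Sketch of the major-version check behind the skipped upgrade handlers.
container_image=$(docker inspect --format '{{.Config.Image}}' netbox-postgres-1)  # container name assumed
container_major=${container_image##*:}   # e.g. "16.9-alpine"
container_major=${container_major%%.*}   # -> "16"
image_major=16                           # major of the image referenced in docker-compose.yml
if [ "$container_major" != "$image_major" ]; then
    echo "postgres major version changed -> run pgautoupgrade"
else
    echo "postgres major versions match ($image_major) -> nothing to do"
fi
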
2025-05-28 16:46:54.034580 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left). 2025-05-28 16:46:54.034591 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left). 2025-05-28 16:46:54.034601 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left). 2025-05-28 16:46:54.034612 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left). 2025-05-28 16:46:54.034623 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left). 2025-05-28 16:46:54.034633 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left). 2025-05-28 16:46:54.034644 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left). 2025-05-28 16:46:54.034655 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left). 2025-05-28 16:46:54.034665 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left). 2025-05-28 16:46:54.034676 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left). 2025-05-28 16:46:54.034709 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left). 2025-05-28 16:46:54.034720 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left). 2025-05-28 16:46:54.034731 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left). 
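The twenty retry lines above are Ansible's retries/until polling: the handler re-checks container state on a fixed budget of 60 attempts until every container reports healthy, and only then does the task report changed, as the next entry shows. A minimal shell sketch of the same polling idea; the docker health filter and the 5-second delay are assumptions, since neither the actual check command nor its delay value is visible in this log:

    # Sketch only, not the role's actual task: poll until no container is
    # starting or unhealthy, burning down a retry budget of 60 attempts.
    # The docker filter and the 5-second delay are assumptions.
    retries=60
    until [ "$(docker ps -q --filter health=starting --filter health=unhealthy | wc -l)" -eq 0 ]; do
        retries=$((retries - 1))
        if [ "$retries" -le 0 ]; then
            echo "containers never became healthy" >&2
            exit 1
        fi
        echo "FAILED - RETRYING: Check that all containers are in a good state ($retries retries left)."
        sleep 5
    done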
2025-05-28 16:46:54.034768 | orchestrator | changed: [testbed-manager] 2025-05-28 16:46:54.034782 | orchestrator | 2025-05-28 16:46:54.034795 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-05-28 16:46:54.034805 | orchestrator | 2025-05-28 16:46:54.034817 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-28 16:46:56.168604 | orchestrator | ok: [testbed-manager] 2025-05-28 16:46:56.168705 | orchestrator | 2025-05-28 16:46:56.168720 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-05-28 16:46:56.286960 | orchestrator | included: osism.services.manager for testbed-manager 2025-05-28 16:46:56.287031 | orchestrator | 2025-05-28 16:46:56.287045 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-05-28 16:46:56.357361 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-05-28 16:46:56.357430 | orchestrator | 2025-05-28 16:46:56.357443 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-05-28 16:46:58.195361 | orchestrator | ok: [testbed-manager] 2025-05-28 16:46:58.195437 | orchestrator | 2025-05-28 16:46:58.195450 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-05-28 16:46:58.253963 | orchestrator | ok: [testbed-manager] 2025-05-28 16:46:58.254068 | orchestrator | 2025-05-28 16:46:58.254085 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-05-28 16:46:58.353636 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-05-28 16:46:58.353696 | orchestrator | 2025-05-28 16:46:58.353710 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-05-28 16:47:01.189073 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-05-28 16:47:01.189206 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-05-28 16:47:01.189223 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-05-28 16:47:01.189235 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-05-28 16:47:01.189247 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-05-28 16:47:01.189259 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-05-28 16:47:01.189270 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-05-28 16:47:01.189281 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-05-28 16:47:01.189298 | orchestrator | 2025-05-28 16:47:01.189310 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2025-05-28 16:47:01.830501 | orchestrator | changed: [testbed-manager] 2025-05-28 16:47:01.830596 | orchestrator | 2025-05-28 16:47:01.830604 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-05-28 16:47:02.484119 | orchestrator | changed: [testbed-manager] 2025-05-28 16:47:02.484255 | orchestrator | 2025-05-28 16:47:02.484272 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-05-28 16:47:02.569254 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-05-28 16:47:02.569366 | orchestrator | 2025-05-28 16:47:02.569381 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-05-28 16:47:03.776753 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-05-28 16:47:03.776873 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-05-28 16:47:03.776888 | orchestrator | 2025-05-28 16:47:03.776901 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-05-28 16:47:04.426688 | orchestrator | changed: [testbed-manager] 2025-05-28 16:47:04.426819 | orchestrator | 2025-05-28 16:47:04.426835 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-05-28 16:47:04.484897 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:47:04.485036 | orchestrator | 2025-05-28 16:47:04.485050 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-05-28 16:47:04.555376 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-05-28 16:47:04.555487 | orchestrator | 2025-05-28 16:47:04.555502 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-05-28 16:47:05.970311 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-28 16:47:05.970422 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-28 16:47:05.970435 | orchestrator | changed: [testbed-manager] 2025-05-28 16:47:05.970448 | orchestrator | 2025-05-28 16:47:05.970460 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-05-28 16:47:06.608837 | orchestrator | changed: [testbed-manager] 2025-05-28 16:47:06.608968 | orchestrator | 2025-05-28 16:47:06.608990 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-05-28 16:47:06.699620 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager 2025-05-28 16:47:06.699731 | orchestrator | 2025-05-28 16:47:06.699744 | orchestrator | TASK [osism.services.manager : Copy secret files] ****************************** 2025-05-28 16:47:07.937879 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-28 16:47:07.938088 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-28 16:47:07.938107 | orchestrator | changed: [testbed-manager] 2025-05-28 16:47:07.938121 | orchestrator | 2025-05-28 16:47:07.938133 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] ******************* 2025-05-28 16:47:08.561039 | orchestrator | changed: [testbed-manager] 2025-05-28 16:47:08.561158 | orchestrator | 2025-05-28 16:47:08.561173 | orchestrator | TASK [osism.services.manager : Copy inventory-reconciler environment file] ***** 2025-05-28 16:47:09.185481 | orchestrator | changed: [testbed-manager] 2025-05-28 16:47:09.185600 | orchestrator | 2025-05-28 16:47:09.185616 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-05-28 16:47:09.335663 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-05-28 
16:47:09.335787 | orchestrator | 2025-05-28 16:47:09.335802 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-05-28 16:47:09.864213 | orchestrator | changed: [testbed-manager] 2025-05-28 16:47:09.864351 | orchestrator | 2025-05-28 16:47:09.864367 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-05-28 16:47:10.278158 | orchestrator | changed: [testbed-manager] 2025-05-28 16:47:10.278287 | orchestrator | 2025-05-28 16:47:10.278303 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-05-28 16:47:11.529610 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-05-28 16:47:11.529734 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-05-28 16:47:11.529749 | orchestrator | 2025-05-28 16:47:11.529762 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-05-28 16:47:12.172853 | orchestrator | changed: [testbed-manager] 2025-05-28 16:47:12.173072 | orchestrator | 2025-05-28 16:47:12.173094 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-05-28 16:47:12.568221 | orchestrator | ok: [testbed-manager] 2025-05-28 16:47:12.568341 | orchestrator | 2025-05-28 16:47:12.568354 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-05-28 16:47:12.905966 | orchestrator | changed: [testbed-manager] 2025-05-28 16:47:12.906143 | orchestrator | 2025-05-28 16:47:12.906161 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-05-28 16:47:12.959499 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:47:12.959631 | orchestrator | 2025-05-28 16:47:12.959646 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-05-28 16:47:13.049765 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-05-28 16:47:13.049882 | orchestrator | 2025-05-28 16:47:13.049897 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-05-28 16:47:13.097794 | orchestrator | ok: [testbed-manager] 2025-05-28 16:47:13.097991 | orchestrator | 2025-05-28 16:47:13.098107 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-05-28 16:47:15.165421 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-05-28 16:47:15.165584 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-05-28 16:47:15.165601 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-05-28 16:47:15.165612 | orchestrator | 2025-05-28 16:47:15.165645 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-05-28 16:47:15.914110 | orchestrator | changed: [testbed-manager] 2025-05-28 16:47:15.914233 | orchestrator | 2025-05-28 16:47:15.914249 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-05-28 16:47:16.662975 | orchestrator | changed: [testbed-manager] 2025-05-28 16:47:16.663075 | orchestrator | 2025-05-28 16:47:16.663084 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-05-28 16:47:17.381605 | orchestrator | changed: [testbed-manager] 2025-05-28 
16:47:17.381758 | orchestrator | 2025-05-28 16:47:17.381787 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-05-28 16:47:17.457393 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-05-28 16:47:17.457494 | orchestrator | 2025-05-28 16:47:17.457506 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-05-28 16:47:17.512303 | orchestrator | ok: [testbed-manager] 2025-05-28 16:47:17.512402 | orchestrator | 2025-05-28 16:47:17.512416 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-05-28 16:47:18.245825 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-05-28 16:47:18.246009 | orchestrator | 2025-05-28 16:47:18.246089 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-05-28 16:47:18.332601 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-05-28 16:47:18.332727 | orchestrator | 2025-05-28 16:47:18.332759 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-05-28 16:47:19.078705 | orchestrator | changed: [testbed-manager] 2025-05-28 16:47:19.078832 | orchestrator | 2025-05-28 16:47:19.078848 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-05-28 16:47:19.709076 | orchestrator | ok: [testbed-manager] 2025-05-28 16:47:19.709145 | orchestrator | 2025-05-28 16:47:19.709159 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-05-28 16:47:19.761804 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:47:19.761868 | orchestrator | 2025-05-28 16:47:19.761884 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-05-28 16:47:19.820253 | orchestrator | ok: [testbed-manager] 2025-05-28 16:47:19.820313 | orchestrator | 2025-05-28 16:47:19.820329 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-05-28 16:47:20.690364 | orchestrator | changed: [testbed-manager] 2025-05-28 16:47:20.690488 | orchestrator | 2025-05-28 16:47:20.690502 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-05-28 16:48:07.632461 | orchestrator | changed: [testbed-manager] 2025-05-28 16:48:07.632595 | orchestrator | 2025-05-28 16:48:07.632614 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-05-28 16:48:08.354789 | orchestrator | ok: [testbed-manager] 2025-05-28 16:48:08.354901 | orchestrator | 2025-05-28 16:48:08.354917 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-05-28 16:48:10.762915 | orchestrator | changed: [testbed-manager] 2025-05-28 16:48:10.763086 | orchestrator | 2025-05-28 16:48:10.763103 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-05-28 16:48:10.832786 | orchestrator | ok: [testbed-manager] 2025-05-28 16:48:10.832916 | orchestrator | 2025-05-28 16:48:10.832931 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-05-28 16:48:10.832944 | orchestrator | 2025-05-28 
16:48:10.832956 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-05-28 16:48:10.891233 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:48:10.891340 | orchestrator | 2025-05-28 16:48:10.891355 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-05-28 16:49:10.944671 | orchestrator | Pausing for 60 seconds 2025-05-28 16:49:10.944849 | orchestrator | changed: [testbed-manager] 2025-05-28 16:49:10.944865 | orchestrator | 2025-05-28 16:49:10.944879 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-05-28 16:49:15.512986 | orchestrator | changed: [testbed-manager] 2025-05-28 16:49:15.513148 | orchestrator | 2025-05-28 16:49:15.513166 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-05-28 16:49:57.126633 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-05-28 16:49:57.126736 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-05-28 16:49:57.126744 | orchestrator | changed: [testbed-manager] 2025-05-28 16:49:57.126752 | orchestrator | 2025-05-28 16:49:57.126759 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-05-28 16:50:06.538509 | orchestrator | changed: [testbed-manager] 2025-05-28 16:50:06.538658 | orchestrator | 2025-05-28 16:50:06.538676 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-05-28 16:50:06.642947 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-05-28 16:50:06.643153 | orchestrator | 2025-05-28 16:50:06.643174 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-05-28 16:50:06.643187 | orchestrator | 2025-05-28 16:50:06.643199 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-05-28 16:50:06.696580 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:50:06.696705 | orchestrator | 2025-05-28 16:50:06.696724 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 16:50:06.696749 | orchestrator | testbed-manager : ok=111 changed=59 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 2025-05-28 16:50:06.696769 | orchestrator | 2025-05-28 16:50:06.811558 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-28 16:50:06.811685 | orchestrator | + deactivate 2025-05-28 16:50:06.811701 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-05-28 16:50:06.811716 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-28 16:50:06.811727 | orchestrator | + export PATH 2025-05-28 16:50:06.811739 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-05-28 16:50:06.811750 | orchestrator | + '[' -n '' ']' 2025-05-28 16:50:06.811761 | orchestrator | + hash -r 2025-05-28 16:50:06.811772 | orchestrator | + '[' -n '' ']' 2025-05-28 16:50:06.811783 | orchestrator | + unset VIRTUAL_ENV 2025-05-28 16:50:06.811795 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-05-28 16:50:06.811807 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-05-28 16:50:06.811817 | orchestrator | + unset -f deactivate 2025-05-28 16:50:06.811829 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-05-28 16:50:06.821324 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-05-28 16:50:06.821421 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-05-28 16:50:06.821447 | orchestrator | + local max_attempts=60 2025-05-28 16:50:06.821461 | orchestrator | + local name=ceph-ansible 2025-05-28 16:50:06.821472 | orchestrator | + local attempt_num=1 2025-05-28 16:50:06.822190 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-05-28 16:50:06.857952 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-28 16:50:06.858153 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-05-28 16:50:06.858170 | orchestrator | + local max_attempts=60 2025-05-28 16:50:06.858182 | orchestrator | + local name=kolla-ansible 2025-05-28 16:50:06.858194 | orchestrator | + local attempt_num=1 2025-05-28 16:50:06.858703 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-05-28 16:50:06.898523 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-28 16:50:06.898634 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-05-28 16:50:06.898647 | orchestrator | + local max_attempts=60 2025-05-28 16:50:06.898658 | orchestrator | + local name=osism-ansible 2025-05-28 16:50:06.898669 | orchestrator | + local attempt_num=1 2025-05-28 16:50:06.898926 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-05-28 16:50:06.935154 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-28 16:50:06.935259 | orchestrator | + [[ true == \t\r\u\e ]] 2025-05-28 16:50:06.935301 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-05-28 16:50:07.674444 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-05-28 16:50:07.865633 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-05-28 16:50:07.865777 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-05-28 16:50:07.865801 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-05-28 16:50:07.865822 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-05-28 16:50:07.865842 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-05-28 16:50:07.865854 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2025-05-28 16:50:07.865865 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" conductor About a minute ago Up About a minute (healthy) 2025-05-28 16:50:07.865876 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2025-05-28 16:50:07.865887 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest 
"/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy) 2025-05-28 16:50:07.865898 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2025-05-28 16:50:07.865909 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-05-28 16:50:07.865919 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" netbox About a minute ago Up About a minute (healthy) 2025-05-28 16:50:07.865930 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-05-28 16:50:07.866432 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.3-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-05-28 16:50:07.866458 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" watchdog About a minute ago Up About a minute (healthy) 2025-05-28 16:50:07.866469 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-05-28 16:50:07.866481 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-05-28 16:50:07.866492 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-05-28 16:50:07.872222 | orchestrator | + docker compose --project-directory /opt/netbox ps 2025-05-28 16:50:08.024601 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-05-28 16:50:08.024725 | orchestrator | netbox-netbox-1 registry.osism.tech/osism/netbox:v4.2.2 "/usr/bin/tini -- /o…" netbox 8 minutes ago Up 7 minutes (healthy) 2025-05-28 16:50:08.024768 | orchestrator | netbox-netbox-worker-1 registry.osism.tech/osism/netbox:v4.2.2 "/opt/netbox/venv/bi…" netbox-worker 8 minutes ago Up 3 minutes (healthy) 2025-05-28 16:50:08.024781 | orchestrator | netbox-postgres-1 registry.osism.tech/dockerhub/library/postgres:16.9-alpine "docker-entrypoint.s…" postgres 8 minutes ago Up 7 minutes (healthy) 5432/tcp 2025-05-28 16:50:08.024795 | orchestrator | netbox-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.3-alpine "docker-entrypoint.s…" redis 8 minutes ago Up 7 minutes (healthy) 6379/tcp 2025-05-28 16:50:08.034242 | orchestrator | ++ semver latest 7.0.0 2025-05-28 16:50:08.086849 | orchestrator | + [[ -1 -ge 0 ]] 2025-05-28 16:50:08.086950 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-05-28 16:50:08.086965 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-05-28 16:50:08.091506 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-05-28 16:50:09.784364 | orchestrator | Registering Redlock._acquired_script 2025-05-28 16:50:09.784483 | orchestrator | Registering Redlock._extend_script 2025-05-28 16:50:09.784498 | orchestrator | Registering Redlock._release_script 2025-05-28 16:50:09.973794 | orchestrator | 2025-05-28 16:50:09 | INFO  | Task ac289b33-53d1-4643-b0fd-ab6f8a3aef9b (resolvconf) was prepared for execution. 
2025-05-28 16:50:09.974316 | orchestrator | 2025-05-28 16:50:09 | INFO  | It takes a moment until task ac289b33-53d1-4643-b0fd-ab6f8a3aef9b (resolvconf) has been started and output is visible here. 2025-05-28 16:50:13.892155 | orchestrator | 2025-05-28 16:50:13.892252 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-05-28 16:50:13.892974 | orchestrator | 2025-05-28 16:50:13.893210 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-28 16:50:13.893678 | orchestrator | Wednesday 28 May 2025 16:50:13 +0000 (0:00:00.153) 0:00:00.153 ********* 2025-05-28 16:50:17.933370 | orchestrator | ok: [testbed-manager] 2025-05-28 16:50:17.933505 | orchestrator | 2025-05-28 16:50:17.933657 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-05-28 16:50:17.934503 | orchestrator | Wednesday 28 May 2025 16:50:17 +0000 (0:00:04.043) 0:00:04.197 ********* 2025-05-28 16:50:17.986495 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:50:17.986637 | orchestrator | 2025-05-28 16:50:17.987876 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-05-28 16:50:17.989528 | orchestrator | Wednesday 28 May 2025 16:50:17 +0000 (0:00:00.054) 0:00:04.251 ********* 2025-05-28 16:50:18.065381 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-05-28 16:50:18.065737 | orchestrator | 2025-05-28 16:50:18.066667 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-05-28 16:50:18.067488 | orchestrator | Wednesday 28 May 2025 16:50:18 +0000 (0:00:00.078) 0:00:04.330 ********* 2025-05-28 16:50:18.144317 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-05-28 16:50:18.145429 | orchestrator | 2025-05-28 16:50:18.146535 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-05-28 16:50:18.147200 | orchestrator | Wednesday 28 May 2025 16:50:18 +0000 (0:00:00.078) 0:00:04.409 ********* 2025-05-28 16:50:19.250834 | orchestrator | ok: [testbed-manager] 2025-05-28 16:50:19.251598 | orchestrator | 2025-05-28 16:50:19.253457 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-05-28 16:50:19.254521 | orchestrator | Wednesday 28 May 2025 16:50:19 +0000 (0:00:01.103) 0:00:05.512 ********* 2025-05-28 16:50:19.311152 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:50:19.311918 | orchestrator | 2025-05-28 16:50:19.313051 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-05-28 16:50:19.313667 | orchestrator | Wednesday 28 May 2025 16:50:19 +0000 (0:00:00.064) 0:00:05.577 ********* 2025-05-28 16:50:19.793924 | orchestrator | ok: [testbed-manager] 2025-05-28 16:50:19.794486 | orchestrator | 2025-05-28 16:50:19.795302 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-05-28 16:50:19.795761 | orchestrator | Wednesday 28 May 2025 16:50:19 +0000 (0:00:00.481) 0:00:06.058 ********* 2025-05-28 16:50:19.876666 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:50:19.876782 | orchestrator | 2025-05-28 16:50:19.876797 | orchestrator | TASK 
[osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-05-28 16:50:19.876810 | orchestrator | Wednesday 28 May 2025 16:50:19 +0000 (0:00:00.081) 0:00:06.140 ********* 2025-05-28 16:50:20.474781 | orchestrator | changed: [testbed-manager] 2025-05-28 16:50:20.475098 | orchestrator | 2025-05-28 16:50:20.475798 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-05-28 16:50:20.476494 | orchestrator | Wednesday 28 May 2025 16:50:20 +0000 (0:00:00.598) 0:00:06.738 ********* 2025-05-28 16:50:21.665313 | orchestrator | changed: [testbed-manager] 2025-05-28 16:50:21.665534 | orchestrator | 2025-05-28 16:50:21.666679 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-05-28 16:50:21.667556 | orchestrator | Wednesday 28 May 2025 16:50:21 +0000 (0:00:01.189) 0:00:07.928 ********* 2025-05-28 16:50:22.667739 | orchestrator | ok: [testbed-manager] 2025-05-28 16:50:22.668702 | orchestrator | 2025-05-28 16:50:22.669437 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-05-28 16:50:22.669891 | orchestrator | Wednesday 28 May 2025 16:50:22 +0000 (0:00:01.003) 0:00:08.931 ********* 2025-05-28 16:50:22.755519 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-05-28 16:50:22.756282 | orchestrator | 2025-05-28 16:50:22.757098 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-05-28 16:50:22.757567 | orchestrator | Wednesday 28 May 2025 16:50:22 +0000 (0:00:00.089) 0:00:09.021 ********* 2025-05-28 16:50:23.914253 | orchestrator | changed: [testbed-manager] 2025-05-28 16:50:23.914784 | orchestrator | 2025-05-28 16:50:23.916463 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 16:50:23.916576 | orchestrator | 2025-05-28 16:50:23 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-28 16:50:23.916594 | orchestrator | 2025-05-28 16:50:23 | INFO  | Please wait and do not abort execution. 
2025-05-28 16:50:23.917261 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-28 16:50:23.917647 | orchestrator | 2025-05-28 16:50:23.918462 | orchestrator | 2025-05-28 16:50:23.919305 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 16:50:23.920117 | orchestrator | Wednesday 28 May 2025 16:50:23 +0000 (0:00:01.159) 0:00:10.180 ********* 2025-05-28 16:50:23.920615 | orchestrator | =============================================================================== 2025-05-28 16:50:23.921255 | orchestrator | Gathering Facts --------------------------------------------------------- 4.04s 2025-05-28 16:50:23.922205 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.19s 2025-05-28 16:50:23.922578 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.16s 2025-05-28 16:50:23.923305 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.10s 2025-05-28 16:50:23.924041 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.00s 2025-05-28 16:50:23.924286 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.60s 2025-05-28 16:50:23.924731 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.48s 2025-05-28 16:50:23.925156 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2025-05-28 16:50:23.925567 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2025-05-28 16:50:23.925922 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2025-05-28 16:50:23.926348 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2025-05-28 16:50:23.927347 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2025-05-28 16:50:23.927787 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.05s 2025-05-28 16:50:24.392883 | orchestrator | + osism apply sshconfig 2025-05-28 16:50:26.054351 | orchestrator | Registering Redlock._acquired_script 2025-05-28 16:50:26.054457 | orchestrator | Registering Redlock._extend_script 2025-05-28 16:50:26.054470 | orchestrator | Registering Redlock._release_script 2025-05-28 16:50:26.120234 | orchestrator | 2025-05-28 16:50:26 | INFO  | Task 6a9b9546-789c-496e-9f16-84c43720c3da (sshconfig) was prepared for execution. 2025-05-28 16:50:26.120335 | orchestrator | 2025-05-28 16:50:26 | INFO  | It takes a moment until task 6a9b9546-789c-496e-9f16-84c43720c3da (sshconfig) has been started and output is visible here. 
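Each `osism apply <role>` in this log follows the same pattern: the CLI registers its Redlock scripts, enqueues a task with a UUID on the manager, and then streams the play output once a worker (apparently one of the manager's Celery services) picks it up, which is why the "output is visible here" notice precedes the play by a few seconds. The bootstrap applies the node-preparation roles sequentially; the three calls below are taken verbatim from the surrounding trace, with the trailing comments as interpretation rather than recorded output:

    osism apply resolvconf -l testbed-manager   # configure systemd-resolved, limited to the manager node
    osism apply sshconfig                       # assemble ~/.ssh/config for the operator user
    osism apply known-hosts                     # ssh-keyscan all nodes into known_hosts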
2025-05-28 16:50:30.068839 | orchestrator | 2025-05-28 16:50:30.069354 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-05-28 16:50:30.070418 | orchestrator | 2025-05-28 16:50:30.072281 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-05-28 16:50:30.073365 | orchestrator | Wednesday 28 May 2025 16:50:30 +0000 (0:00:00.164) 0:00:00.164 ********* 2025-05-28 16:50:30.624683 | orchestrator | ok: [testbed-manager] 2025-05-28 16:50:30.625221 | orchestrator | 2025-05-28 16:50:30.626207 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-05-28 16:50:30.626855 | orchestrator | Wednesday 28 May 2025 16:50:30 +0000 (0:00:00.559) 0:00:00.724 ********* 2025-05-28 16:50:31.157446 | orchestrator | changed: [testbed-manager] 2025-05-28 16:50:31.157547 | orchestrator | 2025-05-28 16:50:31.157608 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-05-28 16:50:31.160271 | orchestrator | Wednesday 28 May 2025 16:50:31 +0000 (0:00:00.532) 0:00:01.256 ********* 2025-05-28 16:50:36.896844 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-05-28 16:50:36.897422 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-05-28 16:50:36.897610 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-05-28 16:50:36.899111 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-05-28 16:50:36.900045 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-05-28 16:50:36.900484 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-05-28 16:50:36.901229 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-05-28 16:50:36.901748 | orchestrator | 2025-05-28 16:50:36.902333 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-05-28 16:50:36.903926 | orchestrator | Wednesday 28 May 2025 16:50:36 +0000 (0:00:05.737) 0:00:06.994 ********* 2025-05-28 16:50:36.963198 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:50:36.963564 | orchestrator | 2025-05-28 16:50:36.964355 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-05-28 16:50:36.965111 | orchestrator | Wednesday 28 May 2025 16:50:36 +0000 (0:00:00.066) 0:00:07.060 ********* 2025-05-28 16:50:37.530003 | orchestrator | changed: [testbed-manager] 2025-05-28 16:50:37.530823 | orchestrator | 2025-05-28 16:50:37.531606 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 16:50:37.532580 | orchestrator | 2025-05-28 16:50:37 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-28 16:50:37.532609 | orchestrator | 2025-05-28 16:50:37 | INFO  | Please wait and do not abort execution. 
2025-05-28 16:50:37.534391 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-28 16:50:37.534861 | orchestrator | 2025-05-28 16:50:37.535402 | orchestrator | 2025-05-28 16:50:37.535811 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 16:50:37.536693 | orchestrator | Wednesday 28 May 2025 16:50:37 +0000 (0:00:00.566) 0:00:07.627 ********* 2025-05-28 16:50:37.536950 | orchestrator | =============================================================================== 2025-05-28 16:50:37.537911 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.74s 2025-05-28 16:50:37.538217 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.57s 2025-05-28 16:50:37.538680 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.56s 2025-05-28 16:50:37.539155 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.53s 2025-05-28 16:50:37.540001 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2025-05-28 16:50:37.980571 | orchestrator | + osism apply known-hosts 2025-05-28 16:50:39.641844 | orchestrator | Registering Redlock._acquired_script 2025-05-28 16:50:39.642784 | orchestrator | Registering Redlock._extend_script 2025-05-28 16:50:39.642818 | orchestrator | Registering Redlock._release_script 2025-05-28 16:50:39.708746 | orchestrator | 2025-05-28 16:50:39 | INFO  | Task a534b290-3675-4d8a-8a35-27dddf5eca60 (known-hosts) was prepared for execution. 2025-05-28 16:50:39.708886 | orchestrator | 2025-05-28 16:50:39 | INFO  | It takes a moment until task a534b290-3675-4d8a-8a35-27dddf5eca60 (known-hosts) has been started and output is visible here. 
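The known_hosts play that follows scans every node twice, once by inventory hostname and once by its ansible_host address, and writes the harvested rsa/ecdsa/ed25519 keys. A minimal stand-alone equivalent as a sketch, with the hostnames and addresses copied from the play output below (the target list and output file are assumptions insofar as the role's actual variables are not shown):

    # Sketch of what the known_hosts play below automates: scan each node by
    # hostname and by its ansible_host address, appending the key types that
    # appear in the play output.
    for target in testbed-manager testbed-node-{0..5} 192.168.16.{5,10,11,12,13,14,15}; do
        ssh-keyscan -t rsa,ecdsa,ed25519 "$target" >> ~/.ssh/known_hosts
    done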
2025-05-28 16:50:43.716293 | orchestrator | 2025-05-28 16:50:43.716425 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-05-28 16:50:43.717042 | orchestrator | 2025-05-28 16:50:43.718096 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-05-28 16:50:43.719268 | orchestrator | Wednesday 28 May 2025 16:50:43 +0000 (0:00:00.172) 0:00:00.172 ********* 2025-05-28 16:50:49.688002 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-28 16:50:49.688198 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-05-28 16:50:49.688217 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-05-28 16:50:49.688856 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-05-28 16:50:49.689904 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-05-28 16:50:49.691205 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-05-28 16:50:49.691529 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-05-28 16:50:49.692266 | orchestrator | 2025-05-28 16:50:49.693071 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-05-28 16:50:49.695079 | orchestrator | Wednesday 28 May 2025 16:50:49 +0000 (0:00:05.971) 0:00:06.144 ********* 2025-05-28 16:50:49.867575 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-05-28 16:50:49.868426 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-28 16:50:49.870451 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-28 16:50:49.871306 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-28 16:50:49.872670 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-28 16:50:49.873134 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-28 16:50:49.874325 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-28 16:50:49.874577 | orchestrator | 2025-05-28 16:50:49.875600 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-28 16:50:49.876039 | orchestrator | Wednesday 28 May 2025 16:50:49 +0000 (0:00:00.182) 0:00:06.327 ********* 2025-05-28 16:50:51.052654 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLTfPSoLtX7K3ShJ6yhl1P8lPBgqknhfPXG3Ykt5QgNojH5aJtkx3ntiHDL3YVfcJKL2Igtq0xzLPXSDJvdH5lU=) 2025-05-28 16:50:51.053644 | 
orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDF9pdZrT8H6wUSZUQSI0Qvfm6ir5Gy8K0aDI3BymIZIm53AGR3/ijC1hKSSzSt6qDMCqwM0wuxrNM6Jynjx+PV3GNzQs60o/vUzQsojfE6n2ZDGfBusahpGCCugvlgcGj+8uMQWiGgEIYTz6Jal2772uKaiR4VC02Trx0j7F27+0Iv2wD9In50kU5jnYRDQ8cacy8UIoQKHQKnFejcfcHXYZEBf2Ki72bb2ndusBexYfpXGO9QMO2tHDdGhz8Qgd61XntLndC8tj+1y8OvVGBbV3Uh9bApNeY+hf/uw+qdslne+LUOsgZ9rLcqKJDerRvdxKv+wUcUsVm7IQRdD2hEEFDLTuAPaFpDuV4Ceyw2ZuERuY88X8+6FQ5aMgioHJAQwu1RZGstVDo7CTg9YQajb0MovzvNCzEwczUBkfhySbMwhaOCiGMks6hBCBjsX2ioU3WQ+K0ZH3WEpTFKap2rRPFFwRjkczZBjYCgBYN7FdPHMnIQBozxtifErb+AOFs=) 2025-05-28 16:50:51.054316 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJBDlNRJG8XNe+Loc4HeRRIdt0MKTY14j0J42V/zbytX) 2025-05-28 16:50:51.054955 | orchestrator | 2025-05-28 16:50:51.055498 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-28 16:50:51.056172 | orchestrator | Wednesday 28 May 2025 16:50:51 +0000 (0:00:01.183) 0:00:07.510 ********* 2025-05-28 16:50:52.163424 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDq1AAb3zgclfgf7UdWwQ/qusTCmjkkSmEPjvw7A7OU9ADIE/1JssdEK+gxxUG5VJ6Ce883VywKAe74eMcZc5NI0puRiGrFdKNpVBUKx7dgkMzFfXDJ0Nf128TgR2oGKpVNkRMd+bBPJZ4Pbc5o8UX+WEd+PDxx1XmH42RH8JsI9KZEVs6PR+lY6ipUl9SLwXjnVcHWsjvYZd97b/s7Xa4rZY3gtCvPtJjQXD4Ny2jlIExRgOFIWJKBmqXy4sNhpbyMJpKutTacBg/WX6H3UoZ9PX7WYmJkEAtjexTrh6RbQJFiCAsUJ8LPWNjgAghLfn1Lh1MzRzLirUWAsVoYZ9rEWOxLQjt/AyYC2pKcMG6eVuTpKTIMFNQ7lv11YQW9YmoZoInL1zNylhdxgk19ajcDVB/sbPx6cdGgMWXQGn5hkwVQSXnwf8bBFjREr5yyqcFqE2JddLHc6+pR2f0tqirp4fBsv5S1l16IZLoRS8Jf50KGUMPz3kglZbRJCFI6pE=) 2025-05-28 16:50:52.163637 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK/v/gOkj43nS9U2o40PL41Tnef+00sg93X+8UNChPezlUv/lXPbashag+k8z/bltto/lj+Y/ZbFSZ4F9RLOrXM=) 2025-05-28 16:50:52.163659 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHhCMNmuiMcAZx2qs/2W/1o9uc1/w60zCQ5hAvDzQQxn) 2025-05-28 16:50:52.163748 | orchestrator | 2025-05-28 16:50:52.163764 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-28 16:50:52.164142 | orchestrator | Wednesday 28 May 2025 16:50:52 +0000 (0:00:01.110) 0:00:08.621 ********* 2025-05-28 16:50:53.250376 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCWZ7pVXJG6MGZqdozwLEANhqYWhPZs0A2HpGi9utDRY/Nrd8pSOb6BLFukMaXon62WrAJz+t9aQp4x22YqybHK1iDMAOzfc25vJRRUA2fuL6CUYBpNf3F1fihl95ieOvS3omg6c9gDJ/UwZcsuLAYua6v2kbeQ3DECPibmWT2800IT7l65iraVkT6rUkixuhTyrnq2i7NZ0jOg0P5PVOjUFcXhYwgMj3srIjR9tMKW6Bn/6OSX0J3tTVKn7tfHY5jdicmi/3SDBWbi4fOZ3fxpZR7IiNqLoTuKpaPQxMBpA6Yq45gHNF/wPEnO20w7lPofKp5BxXOQzSNqS7y50UzxxS1fK8nVwooo5z1K++bIxEfFJSMa2FF5IDYPnmUu5ANIWPM6jvfPqrebnv/eeeDAR722OsbCg++4IXljPRqZ9bqnloGMMDDb3nRl4B9lWYOUpW3A4FA8CDy6V7UEikD7Hy2oLiAHR/dtD6zQ8OvpyQ1AcBIz/osFGQbHlxS7qRc=) 2025-05-28 16:50:53.250558 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBxgR+eXHvvMI1ucz1QAsAMGaMltUx5HX77Eq51EcSsc5xb6VmShuWSf4BtmArNV8Cx2Rn1AxiR2bICNbBKfd0c=) 2025-05-28 16:50:53.251323 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILUFsptCrwtlxsIp1oEirVZXPbLs2OO0vHi/8Im7TBO0) 2025-05-28 
16:50:53.251946 | orchestrator | 2025-05-28 16:50:53.252683 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-28 16:50:53.253936 | orchestrator | Wednesday 28 May 2025 16:50:53 +0000 (0:00:01.088) 0:00:09.709 ********* 2025-05-28 16:50:54.329224 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL0ZUUVdOMcA7O0Wpcg+M92+aXAmWlz37ihrh7TH2ob9) 2025-05-28 16:50:54.329965 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDNngb/Z1/DQvhw481Blrug2Jj7ftb0fuRkXrTZuVXjh37DMaBEHIzoQ9OhHQG8VyXYuJsV0iCEXgLoI5cbCXKNxkSsioVJms09N72Hb2r7KCzxz3JeTY1c9KuWSy47RpXr3CH5uYN8jpolQehQHxoX/8Ywi/yqAtimJK/u9SCA4xBfLabeMKjH4QSCbJpbpnQGsIiz+3cQsAhd6T/yYRKiDUymqarouiH9q2Cg1fVeoQPiEfyTM7zthKD940SU8oG1mZomO5lbjmh6XdAc3aGzARVGtxm7cH8KmkmC6SxvqIndtfdvl6jXSbcXTNpQrsE3bY2cw7HiCTc1XW8Iu8JRkJa+X0Ua+Rtdyr4FwO8+XYJe0e5qQXSW/ctIldQmcoFfAdtjI6j8kHwjpIXByr4kDMgpksBs919Awl93beQUiqb1h/f3i5AHF9iamIa3ZJOElZDEEaaYeBXKt1lRSDUw4hv2aGhy94HMT2GAmrVT7V9ftt9kBS0W0r7x22i7dAM=) 2025-05-28 16:50:54.330875 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOJvuBd7W24CDWWHKFXbjw3HLP5biCzDkhti29HePV90XkkTikWCLKAZLzBaejXOLu4UzvtISYA6+5m9ssobNJ8=) 2025-05-28 16:50:54.332025 | orchestrator | 2025-05-28 16:50:54.332686 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-28 16:50:54.333520 | orchestrator | Wednesday 28 May 2025 16:50:54 +0000 (0:00:01.077) 0:00:10.786 ********* 2025-05-28 16:50:55.402607 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDhfKL1DnvssXynZBlrmm0Lui7RchL+tB9y7v/kA2ybq98a3fDI0m8W7aEjravxwLIh1Di4uvsYFS3DkfVKbMm6sScIayHKeSmoTK/H8s9Ql7K4suyT3aZSACm+0cAT4ag1ovA2tFSPBZdAGg0dBIOE3EPd/1ongvqlVFxJNDbuZx7JxRbEQFqi4Iz97dCGyAazX3EnctHs9CNwNrhv7EGK96j4E5k7z++37YiGfdDOMpGqLq6BuGUEUN2/bF2ETO0lmS3Tm2UlMOc+MbhRW3d2Ra6GVS0mlmr2HTtZMlKIUspIny6daHmiuj7HgX0/31A4PSi0I5KTfYDF7n7HoNi3PstyKp/hFH3c+dEk1qplBPzWuvweMY7kyUz8m4v6Npj2w43E9fxkn5c/LVaLsM5wbev6fVbz2cLxX7rhTtg7rE7LqDY1PlW4PWa0wa5yFt/Ax5IbfKRHOTHJCwEKIXpaozn4mIALNAnwsAjvOrEvNvHJrrj+RNWbTTZkg4iotQ0=) 2025-05-28 16:50:55.403515 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOb2U1V/tgg9tsBmXSEe66JD0Lvvwpmfr5++xr724dCfGXZVOPJ2rZZMD5OCd5cfkSUY8kfIAAIC5D3BF9wi4YY=) 2025-05-28 16:50:55.404257 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINPBDyUr8fJPOegzsijh+PAGHHAY74Lkn7AC0SvoqGWM) 2025-05-28 16:50:55.404826 | orchestrator | 2025-05-28 16:50:55.405926 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-28 16:50:55.406399 | orchestrator | Wednesday 28 May 2025 16:50:55 +0000 (0:00:01.074) 0:00:11.861 ********* 2025-05-28 16:50:56.458355 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDbiXeADao8oR0K/HxCNQedAzSFFHiafavf3hg9UFvsLtK5jldU0DpJacdR2+ohT3xSfJZ6XvfpN7+BEOD7c2sCGPjmPuLqNuKxNm41RPCAAi9fdPxeEo4PMJH5cbFiTQJ9K4LR16zhpc0hJvWMX7eFUc71pmcB6zEZjCruhRJDsm/bJ4LMz46bW6yonGQemITGMXGdOzVvRvl7r06Dc8nRErq7neLm9d0sjwjxzHu+qLx6b1algzFuI79qK+lVT9D2GyDeVHrlr489OIiN1jqai+WfUzhII+IXcl4EUTLWt+5uCsFgSMQUZWEzBW/SH6MNJovKE5W8MBOTWtv1GMolYhR4BE6ojJp++jUynA/mSOh+xKJvSVGDRKWHCTLuimvxbM8T5tjlkt19fv6I2torlpSZBjCibVZ3QU+2OApSBwr+ruLlKl+kGQRHDe6jzGbr2lpBv9Ol+ujv3fMHy+TWmJdP+gHmH3KhYOpl9td/NHS/dZJ3fu+rmp1wFyUDJQ8=) 2025-05-28 16:50:56.459295 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFzz0pi8V8Bn19UBDLHRwCa0P0bntA/PhBjyQOJZSRwVVG6je9N87HuoFm4wLOwcGT6Vl5JJP2DTarnMlwNMyjU=) 2025-05-28 16:50:56.459347 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICgMk9m7x8/YJqJhINcTxLYZBiok+yykIQkeW4E5VLVL) 2025-05-28 16:50:56.459362 | orchestrator | 2025-05-28 16:50:56.460136 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-28 16:50:56.460469 | orchestrator | Wednesday 28 May 2025 16:50:56 +0000 (0:00:01.050) 0:00:12.912 ********* 2025-05-28 16:50:57.552914 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEGO7pQv8lcH7eko0/7YypNjz291PhNRDMIa7IQPBkhOaiIx0xSPgEOYRkvsyLJKGJdtX+FikxJXrpSd/rC6zso=) 2025-05-28 16:50:57.553053 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDjCYVb9hiq1fr3AhwcvEKuU+x//a9WpLxR4b5j73xKLA/zxfNUiDKWItklG7RtpucS0l23dranfApj48OKeHV+bqkPLcERNELIJvUOjEirSajsN93moCRIpXue7nKuVe5wqEoFDu6BLfvWIukvreoPAjbhZw1hU8QNszQs/0gEfmmbeux6EoF56IYBNxYIbkDsloTq88V+EkaJAQAuF8I7g3sjWd0HsdbRVxqC0U3mJNLC8mMDK26dj4maJgAHYG5Qip5c826H0B031W16b+Q7oNnQODrJTW7OCusCGWS4q2YrkpQPoC+22Iw/Et2vvaApGIPNmTPCMkWaJNagAhWRj5xMyof+pD5qYRJvyClD6C9h17v53g/eaCyphWWvWOM02MT7RnFf2otBn/rRm3eLPDU6auakbAYjFpikjDM6hBppm7vlIUDkTSzhDr4buytYzPDUQ89LiejoFUPgqsegLEbOFuH6MIZm8D8aFNwlCIUwmz5IqfyuMmk8APurjXc=) 2025-05-28 16:50:57.553806 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIID1/bsBh285moEiOIYG8VzBVRnQO7g8GPBwyk7YiTdy) 2025-05-28 16:50:57.554819 | orchestrator | 2025-05-28 16:50:57.555511 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-05-28 16:50:57.556334 | orchestrator | Wednesday 28 May 2025 16:50:57 +0000 (0:00:01.098) 0:00:14.010 ********* 2025-05-28 16:51:02.831756 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-28 16:51:02.831900 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-05-28 16:51:02.833264 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-05-28 16:51:02.833295 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-05-28 16:51:02.834422 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-05-28 16:51:02.834584 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-05-28 16:51:02.836294 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-05-28 16:51:02.836833 | orchestrator | 2025-05-28 16:51:02.837371 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-05-28 16:51:02.837816 | orchestrator | Wednesday 28 May 2025 16:51:02 +0000 
(0:00:05.279) 0:00:19.289 ********* 2025-05-28 16:51:03.010321 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-05-28 16:51:03.012531 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-28 16:51:03.016042 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-28 16:51:03.017246 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-28 16:51:03.019723 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-28 16:51:03.019855 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-28 16:51:03.020335 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-28 16:51:03.020362 | orchestrator | 2025-05-28 16:51:03.021387 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-28 16:51:03.021748 | orchestrator | Wednesday 28 May 2025 16:51:03 +0000 (0:00:00.180) 0:00:19.469 ********* 2025-05-28 16:51:04.147568 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJBDlNRJG8XNe+Loc4HeRRIdt0MKTY14j0J42V/zbytX) 2025-05-28 16:51:04.148455 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDF9pdZrT8H6wUSZUQSI0Qvfm6ir5Gy8K0aDI3BymIZIm53AGR3/ijC1hKSSzSt6qDMCqwM0wuxrNM6Jynjx+PV3GNzQs60o/vUzQsojfE6n2ZDGfBusahpGCCugvlgcGj+8uMQWiGgEIYTz6Jal2772uKaiR4VC02Trx0j7F27+0Iv2wD9In50kU5jnYRDQ8cacy8UIoQKHQKnFejcfcHXYZEBf2Ki72bb2ndusBexYfpXGO9QMO2tHDdGhz8Qgd61XntLndC8tj+1y8OvVGBbV3Uh9bApNeY+hf/uw+qdslne+LUOsgZ9rLcqKJDerRvdxKv+wUcUsVm7IQRdD2hEEFDLTuAPaFpDuV4Ceyw2ZuERuY88X8+6FQ5aMgioHJAQwu1RZGstVDo7CTg9YQajb0MovzvNCzEwczUBkfhySbMwhaOCiGMks6hBCBjsX2ioU3WQ+K0ZH3WEpTFKap2rRPFFwRjkczZBjYCgBYN7FdPHMnIQBozxtifErb+AOFs=) 2025-05-28 16:51:04.149323 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLTfPSoLtX7K3ShJ6yhl1P8lPBgqknhfPXG3Ykt5QgNojH5aJtkx3ntiHDL3YVfcJKL2Igtq0xzLPXSDJvdH5lU=) 2025-05-28 16:51:04.149706 | orchestrator | 2025-05-28 16:51:04.150447 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-28 16:51:04.151202 | orchestrator | Wednesday 28 May 2025 16:51:04 +0000 (0:00:01.136) 0:00:20.606 ********* 2025-05-28 16:51:05.226890 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHhCMNmuiMcAZx2qs/2W/1o9uc1/w60zCQ5hAvDzQQxn) 2025-05-28 16:51:05.227006 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDDq1AAb3zgclfgf7UdWwQ/qusTCmjkkSmEPjvw7A7OU9ADIE/1JssdEK+gxxUG5VJ6Ce883VywKAe74eMcZc5NI0puRiGrFdKNpVBUKx7dgkMzFfXDJ0Nf128TgR2oGKpVNkRMd+bBPJZ4Pbc5o8UX+WEd+PDxx1XmH42RH8JsI9KZEVs6PR+lY6ipUl9SLwXjnVcHWsjvYZd97b/s7Xa4rZY3gtCvPtJjQXD4Ny2jlIExRgOFIWJKBmqXy4sNhpbyMJpKutTacBg/WX6H3UoZ9PX7WYmJkEAtjexTrh6RbQJFiCAsUJ8LPWNjgAghLfn1Lh1MzRzLirUWAsVoYZ9rEWOxLQjt/AyYC2pKcMG6eVuTpKTIMFNQ7lv11YQW9YmoZoInL1zNylhdxgk19ajcDVB/sbPx6cdGgMWXQGn5hkwVQSXnwf8bBFjREr5yyqcFqE2JddLHc6+pR2f0tqirp4fBsv5S1l16IZLoRS8Jf50KGUMPz3kglZbRJCFI6pE=) 2025-05-28 16:51:05.227027 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK/v/gOkj43nS9U2o40PL41Tnef+00sg93X+8UNChPezlUv/lXPbashag+k8z/bltto/lj+Y/ZbFSZ4F9RLOrXM=) 2025-05-28 16:51:05.227040 | orchestrator | 2025-05-28 16:51:05.227442 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-28 16:51:05.227606 | orchestrator | Wednesday 28 May 2025 16:51:05 +0000 (0:00:01.079) 0:00:21.685 ********* 2025-05-28 16:51:06.306545 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCWZ7pVXJG6MGZqdozwLEANhqYWhPZs0A2HpGi9utDRY/Nrd8pSOb6BLFukMaXon62WrAJz+t9aQp4x22YqybHK1iDMAOzfc25vJRRUA2fuL6CUYBpNf3F1fihl95ieOvS3omg6c9gDJ/UwZcsuLAYua6v2kbeQ3DECPibmWT2800IT7l65iraVkT6rUkixuhTyrnq2i7NZ0jOg0P5PVOjUFcXhYwgMj3srIjR9tMKW6Bn/6OSX0J3tTVKn7tfHY5jdicmi/3SDBWbi4fOZ3fxpZR7IiNqLoTuKpaPQxMBpA6Yq45gHNF/wPEnO20w7lPofKp5BxXOQzSNqS7y50UzxxS1fK8nVwooo5z1K++bIxEfFJSMa2FF5IDYPnmUu5ANIWPM6jvfPqrebnv/eeeDAR722OsbCg++4IXljPRqZ9bqnloGMMDDb3nRl4B9lWYOUpW3A4FA8CDy6V7UEikD7Hy2oLiAHR/dtD6zQ8OvpyQ1AcBIz/osFGQbHlxS7qRc=) 2025-05-28 16:51:06.306663 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBxgR+eXHvvMI1ucz1QAsAMGaMltUx5HX77Eq51EcSsc5xb6VmShuWSf4BtmArNV8Cx2Rn1AxiR2bICNbBKfd0c=) 2025-05-28 16:51:06.307211 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILUFsptCrwtlxsIp1oEirVZXPbLs2OO0vHi/8Im7TBO0) 2025-05-28 16:51:06.308060 | orchestrator | 2025-05-28 16:51:06.309457 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-28 16:51:06.310393 | orchestrator | Wednesday 28 May 2025 16:51:06 +0000 (0:00:01.078) 0:00:22.764 ********* 2025-05-28 16:51:07.373623 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL0ZUUVdOMcA7O0Wpcg+M92+aXAmWlz37ihrh7TH2ob9) 2025-05-28 16:51:07.374747 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDNngb/Z1/DQvhw481Blrug2Jj7ftb0fuRkXrTZuVXjh37DMaBEHIzoQ9OhHQG8VyXYuJsV0iCEXgLoI5cbCXKNxkSsioVJms09N72Hb2r7KCzxz3JeTY1c9KuWSy47RpXr3CH5uYN8jpolQehQHxoX/8Ywi/yqAtimJK/u9SCA4xBfLabeMKjH4QSCbJpbpnQGsIiz+3cQsAhd6T/yYRKiDUymqarouiH9q2Cg1fVeoQPiEfyTM7zthKD940SU8oG1mZomO5lbjmh6XdAc3aGzARVGtxm7cH8KmkmC6SxvqIndtfdvl6jXSbcXTNpQrsE3bY2cw7HiCTc1XW8Iu8JRkJa+X0Ua+Rtdyr4FwO8+XYJe0e5qQXSW/ctIldQmcoFfAdtjI6j8kHwjpIXByr4kDMgpksBs919Awl93beQUiqb1h/f3i5AHF9iamIa3ZJOElZDEEaaYeBXKt1lRSDUw4hv2aGhy94HMT2GAmrVT7V9ftt9kBS0W0r7x22i7dAM=) 2025-05-28 16:51:07.375050 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOJvuBd7W24CDWWHKFXbjw3HLP5biCzDkhti29HePV90XkkTikWCLKAZLzBaejXOLu4UzvtISYA6+5m9ssobNJ8=) 2025-05-28 
16:51:07.376796 | orchestrator | 2025-05-28 16:51:07.378599 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-28 16:51:07.378878 | orchestrator | Wednesday 28 May 2025 16:51:07 +0000 (0:00:01.068) 0:00:23.832 ********* 2025-05-28 16:51:08.424132 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOb2U1V/tgg9tsBmXSEe66JD0Lvvwpmfr5++xr724dCfGXZVOPJ2rZZMD5OCd5cfkSUY8kfIAAIC5D3BF9wi4YY=) 2025-05-28 16:51:08.426194 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINPBDyUr8fJPOegzsijh+PAGHHAY74Lkn7AC0SvoqGWM) 2025-05-28 16:51:08.426856 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDhfKL1DnvssXynZBlrmm0Lui7RchL+tB9y7v/kA2ybq98a3fDI0m8W7aEjravxwLIh1Di4uvsYFS3DkfVKbMm6sScIayHKeSmoTK/H8s9Ql7K4suyT3aZSACm+0cAT4ag1ovA2tFSPBZdAGg0dBIOE3EPd/1ongvqlVFxJNDbuZx7JxRbEQFqi4Iz97dCGyAazX3EnctHs9CNwNrhv7EGK96j4E5k7z++37YiGfdDOMpGqLq6BuGUEUN2/bF2ETO0lmS3Tm2UlMOc+MbhRW3d2Ra6GVS0mlmr2HTtZMlKIUspIny6daHmiuj7HgX0/31A4PSi0I5KTfYDF7n7HoNi3PstyKp/hFH3c+dEk1qplBPzWuvweMY7kyUz8m4v6Npj2w43E9fxkn5c/LVaLsM5wbev6fVbz2cLxX7rhTtg7rE7LqDY1PlW4PWa0wa5yFt/Ax5IbfKRHOTHJCwEKIXpaozn4mIALNAnwsAjvOrEvNvHJrrj+RNWbTTZkg4iotQ0=) 2025-05-28 16:51:08.427898 | orchestrator | 2025-05-28 16:51:08.428893 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-28 16:51:08.429756 | orchestrator | Wednesday 28 May 2025 16:51:08 +0000 (0:00:01.049) 0:00:24.882 ********* 2025-05-28 16:51:09.483338 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDbiXeADao8oR0K/HxCNQedAzSFFHiafavf3hg9UFvsLtK5jldU0DpJacdR2+ohT3xSfJZ6XvfpN7+BEOD7c2sCGPjmPuLqNuKxNm41RPCAAi9fdPxeEo4PMJH5cbFiTQJ9K4LR16zhpc0hJvWMX7eFUc71pmcB6zEZjCruhRJDsm/bJ4LMz46bW6yonGQemITGMXGdOzVvRvl7r06Dc8nRErq7neLm9d0sjwjxzHu+qLx6b1algzFuI79qK+lVT9D2GyDeVHrlr489OIiN1jqai+WfUzhII+IXcl4EUTLWt+5uCsFgSMQUZWEzBW/SH6MNJovKE5W8MBOTWtv1GMolYhR4BE6ojJp++jUynA/mSOh+xKJvSVGDRKWHCTLuimvxbM8T5tjlkt19fv6I2torlpSZBjCibVZ3QU+2OApSBwr+ruLlKl+kGQRHDe6jzGbr2lpBv9Ol+ujv3fMHy+TWmJdP+gHmH3KhYOpl9td/NHS/dZJ3fu+rmp1wFyUDJQ8=) 2025-05-28 16:51:09.483440 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFzz0pi8V8Bn19UBDLHRwCa0P0bntA/PhBjyQOJZSRwVVG6je9N87HuoFm4wLOwcGT6Vl5JJP2DTarnMlwNMyjU=) 2025-05-28 16:51:09.483598 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICgMk9m7x8/YJqJhINcTxLYZBiok+yykIQkeW4E5VLVL) 2025-05-28 16:51:09.483935 | orchestrator | 2025-05-28 16:51:09.484667 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-28 16:51:09.484703 | orchestrator | Wednesday 28 May 2025 16:51:09 +0000 (0:00:01.060) 0:00:25.942 ********* 2025-05-28 16:51:10.533507 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIID1/bsBh285moEiOIYG8VzBVRnQO7g8GPBwyk7YiTdy) 2025-05-28 16:51:10.534880 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDjCYVb9hiq1fr3AhwcvEKuU+x//a9WpLxR4b5j73xKLA/zxfNUiDKWItklG7RtpucS0l23dranfApj48OKeHV+bqkPLcERNELIJvUOjEirSajsN93moCRIpXue7nKuVe5wqEoFDu6BLfvWIukvreoPAjbhZw1hU8QNszQs/0gEfmmbeux6EoF56IYBNxYIbkDsloTq88V+EkaJAQAuF8I7g3sjWd0HsdbRVxqC0U3mJNLC8mMDK26dj4maJgAHYG5Qip5c826H0B031W16b+Q7oNnQODrJTW7OCusCGWS4q2YrkpQPoC+22Iw/Et2vvaApGIPNmTPCMkWaJNagAhWRj5xMyof+pD5qYRJvyClD6C9h17v53g/eaCyphWWvWOM02MT7RnFf2otBn/rRm3eLPDU6auakbAYjFpikjDM6hBppm7vlIUDkTSzhDr4buytYzPDUQ89LiejoFUPgqsegLEbOFuH6MIZm8D8aFNwlCIUwmz5IqfyuMmk8APurjXc=) 2025-05-28 16:51:10.535777 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEGO7pQv8lcH7eko0/7YypNjz291PhNRDMIa7IQPBkhOaiIx0xSPgEOYRkvsyLJKGJdtX+FikxJXrpSd/rC6zso=) 2025-05-28 16:51:10.536577 | orchestrator | 2025-05-28 16:51:10.537921 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-05-28 16:51:10.538842 | orchestrator | Wednesday 28 May 2025 16:51:10 +0000 (0:00:01.048) 0:00:26.991 ********* 2025-05-28 16:51:10.948494 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-05-28 16:51:10.948711 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-05-28 16:51:10.949364 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-05-28 16:51:10.949450 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-05-28 16:51:10.950123 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-05-28 16:51:10.951517 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-05-28 16:51:10.952002 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-05-28 16:51:10.952419 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:51:10.952670 | orchestrator | 2025-05-28 16:51:10.952981 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-05-28 16:51:10.953560 | orchestrator | Wednesday 28 May 2025 16:51:10 +0000 (0:00:00.416) 0:00:27.408 ********* 2025-05-28 16:51:11.000650 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:51:11.000757 | orchestrator | 2025-05-28 16:51:11.001118 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-05-28 16:51:11.002596 | orchestrator | Wednesday 28 May 2025 16:51:10 +0000 (0:00:00.051) 0:00:27.459 ********* 2025-05-28 16:51:11.065290 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:51:11.065740 | orchestrator | 2025-05-28 16:51:11.066736 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-05-28 16:51:11.067435 | orchestrator | Wednesday 28 May 2025 16:51:11 +0000 (0:00:00.065) 0:00:27.525 ********* 2025-05-28 16:51:11.586399 | orchestrator | changed: [testbed-manager] 2025-05-28 16:51:11.586606 | orchestrator | 2025-05-28 16:51:11.587342 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 16:51:11.587621 | orchestrator | 2025-05-28 16:51:11 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-28 16:51:11.587824 | orchestrator | 2025-05-28 16:51:11 | INFO  | Please wait and do not abort execution. 
2025-05-28 16:51:11.588977 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-28 16:51:11.589780 | orchestrator | 2025-05-28 16:51:11.590490 | orchestrator | 2025-05-28 16:51:11.591448 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 16:51:11.591760 | orchestrator | Wednesday 28 May 2025 16:51:11 +0000 (0:00:00.520) 0:00:28.046 ********* 2025-05-28 16:51:11.592790 | orchestrator | =============================================================================== 2025-05-28 16:51:11.593458 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.97s 2025-05-28 16:51:11.594878 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.28s 2025-05-28 16:51:11.596062 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2025-05-28 16:51:11.597143 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-05-28 16:51:11.597357 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-05-28 16:51:11.598492 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-05-28 16:51:11.599047 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-05-28 16:51:11.599372 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-05-28 16:51:11.600641 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-05-28 16:51:11.601668 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-05-28 16:51:11.602530 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-05-28 16:51:11.604008 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-05-28 16:51:11.605221 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-05-28 16:51:11.606263 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-05-28 16:51:11.606867 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-05-28 16:51:11.607929 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-05-28 16:51:11.608823 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.52s 2025-05-28 16:51:11.609582 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.42s 2025-05-28 16:51:11.610639 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2025-05-28 16:51:11.611374 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2025-05-28 16:51:12.043747 | orchestrator | + osism apply squid 2025-05-28 16:51:13.687471 | orchestrator | Registering Redlock._acquired_script 2025-05-28 16:51:13.687937 | orchestrator | Registering Redlock._extend_script 2025-05-28 16:51:13.687970 | orchestrator | Registering Redlock._release_script 2025-05-28 16:51:13.745309 | orchestrator | 2025-05-28 16:51:13 | INFO  | Task 76ff4cba-c0c8-4ef3-bdba-22a5fd687317 (squid) was 
prepared for execution. 2025-05-28 16:51:13.745370 | orchestrator | 2025-05-28 16:51:13 | INFO  | It takes a moment until task 76ff4cba-c0c8-4ef3-bdba-22a5fd687317 (squid) has been started and output is visible here. 2025-05-28 16:51:17.706710 | orchestrator | 2025-05-28 16:51:17.708149 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-05-28 16:51:17.709015 | orchestrator | 2025-05-28 16:51:17.710565 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-05-28 16:51:17.711238 | orchestrator | Wednesday 28 May 2025 16:51:17 +0000 (0:00:00.185) 0:00:00.185 ********* 2025-05-28 16:51:17.796304 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-05-28 16:51:17.796920 | orchestrator | 2025-05-28 16:51:17.799329 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-05-28 16:51:17.799390 | orchestrator | Wednesday 28 May 2025 16:51:17 +0000 (0:00:00.093) 0:00:00.279 ********* 2025-05-28 16:51:19.213571 | orchestrator | ok: [testbed-manager] 2025-05-28 16:51:19.214521 | orchestrator | 2025-05-28 16:51:19.214559 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-05-28 16:51:19.214996 | orchestrator | Wednesday 28 May 2025 16:51:19 +0000 (0:00:01.416) 0:00:01.695 ********* 2025-05-28 16:51:20.368845 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-05-28 16:51:20.369024 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-05-28 16:51:20.370099 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-05-28 16:51:20.370759 | orchestrator | 2025-05-28 16:51:20.371361 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-05-28 16:51:20.371905 | orchestrator | Wednesday 28 May 2025 16:51:20 +0000 (0:00:01.152) 0:00:02.848 ********* 2025-05-28 16:51:21.512317 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-05-28 16:51:21.513465 | orchestrator | 2025-05-28 16:51:21.514526 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-05-28 16:51:21.515432 | orchestrator | Wednesday 28 May 2025 16:51:21 +0000 (0:00:01.143) 0:00:03.992 ********* 2025-05-28 16:51:21.882733 | orchestrator | ok: [testbed-manager] 2025-05-28 16:51:21.883702 | orchestrator | 2025-05-28 16:51:21.884401 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-05-28 16:51:21.884731 | orchestrator | Wednesday 28 May 2025 16:51:21 +0000 (0:00:00.372) 0:00:04.364 ********* 2025-05-28 16:51:22.803969 | orchestrator | changed: [testbed-manager] 2025-05-28 16:51:22.804097 | orchestrator | 2025-05-28 16:51:22.804870 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-05-28 16:51:22.805939 | orchestrator | Wednesday 28 May 2025 16:51:22 +0000 (0:00:00.919) 0:00:05.284 ********* 2025-05-28 16:51:54.245695 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-05-28 16:51:54.245814 | orchestrator | ok: [testbed-manager] 2025-05-28 16:51:54.245829 | orchestrator | 2025-05-28 16:51:54.245894 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-05-28 16:51:54.247231 | orchestrator | Wednesday 28 May 2025 16:51:54 +0000 (0:00:31.439) 0:00:36.723 ********* 2025-05-28 16:52:06.201989 | orchestrator | changed: [testbed-manager] 2025-05-28 16:52:06.203684 | orchestrator | 2025-05-28 16:52:06.203719 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-05-28 16:52:06.203733 | orchestrator | Wednesday 28 May 2025 16:52:06 +0000 (0:00:11.958) 0:00:48.682 ********* 2025-05-28 16:53:06.283556 | orchestrator | Pausing for 60 seconds 2025-05-28 16:53:06.283704 | orchestrator | changed: [testbed-manager] 2025-05-28 16:53:06.283722 | orchestrator | 2025-05-28 16:53:06.283736 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-05-28 16:53:06.283750 | orchestrator | Wednesday 28 May 2025 16:53:06 +0000 (0:01:00.079) 0:01:48.762 ********* 2025-05-28 16:53:06.354879 | orchestrator | ok: [testbed-manager] 2025-05-28 16:53:06.355568 | orchestrator | 2025-05-28 16:53:06.356474 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for a healthy squid service] ****** 2025-05-28 16:53:06.356902 | orchestrator | Wednesday 28 May 2025 16:53:06 +0000 (0:00:00.075) 0:01:48.838 ********* 2025-05-28 16:53:07.005951 | orchestrator | changed: [testbed-manager] 2025-05-28 16:53:07.006242 | orchestrator | 2025-05-28 16:53:07.007880 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 16:53:07.007938 | orchestrator | 2025-05-28 16:53:07 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-28 16:53:07.007955 | orchestrator | 2025-05-28 16:53:07 | INFO  | Please wait and do not abort execution.
2025-05-28 16:53:07.008823 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 16:53:07.010003 | orchestrator | 2025-05-28 16:53:07.010590 | orchestrator | 2025-05-28 16:53:07.011338 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 16:53:07.012074 | orchestrator | Wednesday 28 May 2025 16:53:06 +0000 (0:00:00.650) 0:01:49.488 ********* 2025-05-28 16:53:07.012930 | orchestrator | =============================================================================== 2025-05-28 16:53:07.013675 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2025-05-28 16:53:07.014566 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.44s 2025-05-28 16:53:07.014859 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.96s 2025-05-28 16:53:07.016094 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.42s 2025-05-28 16:53:07.016565 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.15s 2025-05-28 16:53:07.017506 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.14s 2025-05-28 16:53:07.018138 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.92s 2025-05-28 16:53:07.019043 | orchestrator | osism.services.squid : Wait for a healthy squid service ----------------- 0.65s 2025-05-28 16:53:07.020068 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s 2025-05-28 16:53:07.020424 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2025-05-28 16:53:07.020886 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s 2025-05-28 16:53:07.500813 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-28 16:53:07.500932 | orchestrator | ++ semver latest 9.0.0 2025-05-28 16:53:07.549888 | orchestrator | + [[ -1 -lt 0 ]] 2025-05-28 16:53:07.549985 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-28 16:53:07.550114 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-05-28 16:53:09.258215 | orchestrator | Registering Redlock._acquired_script 2025-05-28 16:53:09.258374 | orchestrator | Registering Redlock._extend_script 2025-05-28 16:53:09.258389 | orchestrator | Registering Redlock._release_script 2025-05-28 16:53:09.322361 | orchestrator | 2025-05-28 16:53:09 | INFO  | Task 0e7f0ef2-e026-4164-9507-6c87d0172072 (operator) was prepared for execution. 2025-05-28 16:53:09.322517 | orchestrator | 2025-05-28 16:53:09 | INFO  | It takes a moment until task 0e7f0ef2-e026-4164-9507-6c87d0172072 (operator) has been started and output is visible here.
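
The "+"-prefixed lines above are bash xtrace output from the deploy script: semver latest 9.0.0 prints -1 (for this helper, "latest" sorts below 9.0.0), and the script then runs osism apply operator. A minimal sketch of that version gate follows, in one plausible arrangement of the tests shown; only the string test, the semver call with its 9.0.0 threshold, and the final command come from the log, and MANAGER_VERSION is an assumed variable name:

  #!/usr/bin/env bash
  # Sketch of the version gate from the xtrace above (assumptions noted).
  MANAGER_VERSION=latest      # assumed name; the log shows only the value "latest"

  if [[ $MANAGER_VERSION != latest ]]; then
      echo "deploying pinned manager version $MANAGER_VERSION"   # branch not taken here
  fi

  # semver prints -1, 0 or 1 depending on how the versions compare;
  # "latest" compares below 9.0.0 for this helper, so the deploy continues.
  if [[ $(semver "$MANAGER_VERSION" 9.0.0) -lt 0 ]]; then
      osism apply operator -u ubuntu -l testbed-nodes
  fi
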
2025-05-28 16:53:13.353094 | orchestrator | 2025-05-28 16:53:13.353812 | orchestrator | PLAY [Make ssh pipelining work] ************************************************ 2025-05-28 16:53:13.356182 | orchestrator | 2025-05-28 16:53:13.356953 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-28 16:53:13.357928 | orchestrator | Wednesday 28 May 2025 16:53:13 +0000 (0:00:00.162) 0:00:00.162 ********* 2025-05-28 16:53:16.715434 | orchestrator | ok: [testbed-node-2] 2025-05-28 16:53:16.715542 | orchestrator | ok: [testbed-node-1] 2025-05-28 16:53:16.715902 | orchestrator | ok: [testbed-node-3] 2025-05-28 16:53:16.716404 | orchestrator | ok: [testbed-node-0] 2025-05-28 16:53:16.716791 | orchestrator | ok: [testbed-node-5] 2025-05-28 16:53:16.718292 | orchestrator | ok: [testbed-node-4] 2025-05-28 16:53:16.718487 | orchestrator | 2025-05-28 16:53:16.719192 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-05-28 16:53:16.719419 | orchestrator | Wednesday 28 May 2025 16:53:16 +0000 (0:00:03.369) 0:00:03.532 ********* 2025-05-28 16:53:17.473871 | orchestrator | ok: [testbed-node-1] 2025-05-28 16:53:17.474634 | orchestrator | ok: [testbed-node-2] 2025-05-28 16:53:17.475799 | orchestrator | ok: [testbed-node-5] 2025-05-28 16:53:17.477740 | orchestrator | ok: [testbed-node-0] 2025-05-28 16:53:17.477760 | orchestrator | ok: [testbed-node-3] 2025-05-28 16:53:17.478077 | orchestrator | ok: [testbed-node-4] 2025-05-28 16:53:17.479532 | orchestrator | 2025-05-28 16:53:17.480434 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-05-28 16:53:17.481441 | orchestrator | 2025-05-28 16:53:17.481909 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-28 16:53:17.482712 | orchestrator | Wednesday 28 May 2025 16:53:17 +0000 (0:00:00.757) 0:00:04.289 ********* 2025-05-28 16:53:17.544725 | orchestrator | ok: [testbed-node-0] 2025-05-28 16:53:17.575176 | orchestrator | ok: [testbed-node-1] 2025-05-28 16:53:17.604510 | orchestrator | ok: [testbed-node-2] 2025-05-28 16:53:17.653013 | orchestrator | ok: [testbed-node-3] 2025-05-28 16:53:17.653239 | orchestrator | ok: [testbed-node-4] 2025-05-28 16:53:17.654002 | orchestrator | ok: [testbed-node-5] 2025-05-28 16:53:17.654608 | orchestrator | 2025-05-28 16:53:17.658117 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-28 16:53:17.658142 | orchestrator | Wednesday 28 May 2025 16:53:17 +0000 (0:00:00.179) 0:00:04.469 ********* 2025-05-28 16:53:17.738135 | orchestrator | ok: [testbed-node-0] 2025-05-28 16:53:17.764342 | orchestrator | ok: [testbed-node-1] 2025-05-28 16:53:17.815144 | orchestrator | ok: [testbed-node-2] 2025-05-28 16:53:17.816574 | orchestrator | ok: [testbed-node-3] 2025-05-28 16:53:17.816595 | orchestrator | ok: [testbed-node-4] 2025-05-28 16:53:17.817185 | orchestrator | ok: [testbed-node-5] 2025-05-28 16:53:17.817649 | orchestrator | 2025-05-28 16:53:17.818006 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-28 16:53:17.818438 | orchestrator | Wednesday 28 May 2025 16:53:17 +0000 (0:00:00.162) 0:00:04.632 ********* 2025-05-28 16:53:18.439752 | orchestrator | changed: [testbed-node-2] 2025-05-28 16:53:18.439864 | orchestrator | changed: [testbed-node-3] 2025-05-28 16:53:18.440348 | orchestrator | changed: [testbed-node-0] 2025-05-28
16:53:18.440818 | orchestrator | changed: [testbed-node-1] 2025-05-28 16:53:18.441445 | orchestrator | changed: [testbed-node-5] 2025-05-28 16:53:18.442235 | orchestrator | changed: [testbed-node-4] 2025-05-28 16:53:18.442627 | orchestrator | 2025-05-28 16:53:18.443499 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-28 16:53:18.443965 | orchestrator | Wednesday 28 May 2025 16:53:18 +0000 (0:00:00.621) 0:00:05.253 ********* 2025-05-28 16:53:19.269432 | orchestrator | changed: [testbed-node-3] 2025-05-28 16:53:19.269545 | orchestrator | changed: [testbed-node-4] 2025-05-28 16:53:19.269560 | orchestrator | changed: [testbed-node-5] 2025-05-28 16:53:19.269572 | orchestrator | changed: [testbed-node-2] 2025-05-28 16:53:19.269582 | orchestrator | changed: [testbed-node-0] 2025-05-28 16:53:19.269593 | orchestrator | changed: [testbed-node-1] 2025-05-28 16:53:19.269667 | orchestrator | 2025-05-28 16:53:19.269947 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-28 16:53:19.270345 | orchestrator | Wednesday 28 May 2025 16:53:19 +0000 (0:00:00.830) 0:00:06.084 ********* 2025-05-28 16:53:20.477518 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-05-28 16:53:20.481302 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-05-28 16:53:20.481366 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-05-28 16:53:20.483424 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-05-28 16:53:20.484589 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-05-28 16:53:20.485736 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-05-28 16:53:20.487143 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-05-28 16:53:20.488992 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-05-28 16:53:20.489396 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-05-28 16:53:20.490960 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-05-28 16:53:20.492175 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-05-28 16:53:20.494308 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-05-28 16:53:20.495205 | orchestrator | 2025-05-28 16:53:20.496654 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-05-28 16:53:20.497708 | orchestrator | Wednesday 28 May 2025 16:53:20 +0000 (0:00:01.207) 0:00:07.291 ********* 2025-05-28 16:53:21.697681 | orchestrator | changed: [testbed-node-1] 2025-05-28 16:53:21.697992 | orchestrator | changed: [testbed-node-2] 2025-05-28 16:53:21.698675 | orchestrator | changed: [testbed-node-0] 2025-05-28 16:53:21.699653 | orchestrator | changed: [testbed-node-5] 2025-05-28 16:53:21.701844 | orchestrator | changed: [testbed-node-3] 2025-05-28 16:53:21.701867 | orchestrator | changed: [testbed-node-4] 2025-05-28 16:53:21.701879 | orchestrator | 2025-05-28 16:53:21.701923 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-05-28 16:53:21.701936 | orchestrator | Wednesday 28 May 2025 16:53:21 +0000 (0:00:01.221) 0:00:08.512 ********* 2025-05-28 16:53:22.885094 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-05-28 16:53:22.885344 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-05-28 16:53:22.886289 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-05-28 16:53:23.174717 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-05-28 16:53:23.175867 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-05-28 16:53:23.176058 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-05-28 16:53:23.177400 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-05-28 16:53:23.177849 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-05-28 16:53:23.179073 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-05-28 16:53:23.180292 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-05-28 16:53:23.180490 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-05-28 16:53:23.181332 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-05-28 16:53:23.182182 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-05-28 16:53:23.183664 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-05-28 16:53:23.184768 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-05-28 16:53:23.184794 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-05-28 16:53:23.184997 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-05-28 16:53:23.185016 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-05-28 16:53:23.185852 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-05-28 16:53:23.186252 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-05-28 16:53:23.186692 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-05-28 16:53:23.187045 | orchestrator | 2025-05-28 16:53:23.187629 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-28 16:53:23.188510 | orchestrator | Wednesday 28 May 2025 16:53:23 +0000 (0:00:01.476) 0:00:09.989 ********* 2025-05-28 16:53:23.791572 | orchestrator | changed: [testbed-node-3] 2025-05-28 16:53:23.794556 | orchestrator | changed: [testbed-node-2] 2025-05-28 16:53:23.794588 | orchestrator | changed: [testbed-node-0] 2025-05-28 16:53:23.795503 | orchestrator | changed: [testbed-node-5] 2025-05-28 16:53:23.796466 | orchestrator | changed: [testbed-node-1] 2025-05-28 16:53:23.797440 | orchestrator | changed: [testbed-node-4] 2025-05-28 16:53:23.798314 | orchestrator | 2025-05-28 16:53:23.799397 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-28 16:53:23.802234 | orchestrator | Wednesday 28 May 2025 16:53:23 +0000 (0:00:00.616) 0:00:10.605 ********* 2025-05-28 16:53:23.881194 | orchestrator | skipping: [testbed-node-0] 2025-05-28 16:53:23.906923 | orchestrator | skipping: [testbed-node-1] 2025-05-28 16:53:23.963930 | orchestrator | skipping: [testbed-node-2] 2025-05-28 16:53:23.965210 | orchestrator | skipping: [testbed-node-3] 2025-05-28 16:53:23.965229 | orchestrator | skipping: [testbed-node-4] 2025-05-28 16:53:23.965628 | orchestrator | skipping: [testbed-node-5] 2025-05-28 16:53:23.966533 | orchestrator | 2025-05-28 16:53:23.967226 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 
2025-05-28 16:53:23.967539 | orchestrator | Wednesday 28 May 2025 16:53:23 +0000 (0:00:00.175) 0:00:10.780 ********* 2025-05-28 16:53:24.691357 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-05-28 16:53:24.691533 | orchestrator | changed: [testbed-node-1] 2025-05-28 16:53:24.692067 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-28 16:53:24.696019 | orchestrator | changed: [testbed-node-0] 2025-05-28 16:53:24.696351 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-05-28 16:53:24.697135 | orchestrator | changed: [testbed-node-2] 2025-05-28 16:53:24.698107 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-28 16:53:24.699155 | orchestrator | changed: [testbed-node-3] 2025-05-28 16:53:24.699730 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-28 16:53:24.700294 | orchestrator | changed: [testbed-node-5] 2025-05-28 16:53:24.701168 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-28 16:53:24.702094 | orchestrator | changed: [testbed-node-4] 2025-05-28 16:53:24.702673 | orchestrator | 2025-05-28 16:53:24.703619 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-28 16:53:24.704402 | orchestrator | Wednesday 28 May 2025 16:53:24 +0000 (0:00:00.726) 0:00:11.507 ********* 2025-05-28 16:53:24.734183 | orchestrator | skipping: [testbed-node-0] 2025-05-28 16:53:24.760198 | orchestrator | skipping: [testbed-node-1] 2025-05-28 16:53:24.805039 | orchestrator | skipping: [testbed-node-2] 2025-05-28 16:53:24.833612 | orchestrator | skipping: [testbed-node-3] 2025-05-28 16:53:24.833761 | orchestrator | skipping: [testbed-node-4] 2025-05-28 16:53:24.834236 | orchestrator | skipping: [testbed-node-5] 2025-05-28 16:53:24.834759 | orchestrator | 2025-05-28 16:53:24.835433 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-28 16:53:24.835850 | orchestrator | Wednesday 28 May 2025 16:53:24 +0000 (0:00:00.143) 0:00:11.650 ********* 2025-05-28 16:53:24.907490 | orchestrator | skipping: [testbed-node-0] 2025-05-28 16:53:24.926496 | orchestrator | skipping: [testbed-node-1] 2025-05-28 16:53:24.946400 | orchestrator | skipping: [testbed-node-2] 2025-05-28 16:53:24.988209 | orchestrator | skipping: [testbed-node-3] 2025-05-28 16:53:24.988502 | orchestrator | skipping: [testbed-node-4] 2025-05-28 16:53:24.989545 | orchestrator | skipping: [testbed-node-5] 2025-05-28 16:53:24.990576 | orchestrator | 2025-05-28 16:53:24.990909 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-05-28 16:53:24.991679 | orchestrator | Wednesday 28 May 2025 16:53:24 +0000 (0:00:00.153) 0:00:11.804 ********* 2025-05-28 16:53:25.067777 | orchestrator | skipping: [testbed-node-0] 2025-05-28 16:53:25.088236 | orchestrator | skipping: [testbed-node-1] 2025-05-28 16:53:25.108644 | orchestrator | skipping: [testbed-node-2] 2025-05-28 16:53:25.143165 | orchestrator | skipping: [testbed-node-3] 2025-05-28 16:53:25.143359 | orchestrator | skipping: [testbed-node-4] 2025-05-28 16:53:25.143885 | orchestrator | skipping: [testbed-node-5] 2025-05-28 16:53:25.144054 | orchestrator | 2025-05-28 16:53:25.144519 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-28 16:53:25.144845 | orchestrator | Wednesday 28 May 2025 16:53:25 +0000 (0:00:00.155) 0:00:11.959 ********* 2025-05-28 16:53:25.842684 | orchestrator | changed: [testbed-node-0] 2025-05-28 
16:53:25.843815 | orchestrator | changed: [testbed-node-2] 2025-05-28 16:53:25.845882 | orchestrator | changed: [testbed-node-1] 2025-05-28 16:53:25.846677 | orchestrator | changed: [testbed-node-3] 2025-05-28 16:53:25.847613 | orchestrator | changed: [testbed-node-4] 2025-05-28 16:53:25.848570 | orchestrator | changed: [testbed-node-5] 2025-05-28 16:53:25.849126 | orchestrator | 2025-05-28 16:53:25.850098 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-28 16:53:25.850983 | orchestrator | Wednesday 28 May 2025 16:53:25 +0000 (0:00:00.697) 0:00:12.657 ********* 2025-05-28 16:53:25.937572 | orchestrator | skipping: [testbed-node-0] 2025-05-28 16:53:25.964835 | orchestrator | skipping: [testbed-node-1] 2025-05-28 16:53:26.071802 | orchestrator | skipping: [testbed-node-2] 2025-05-28 16:53:26.072832 | orchestrator | skipping: [testbed-node-3] 2025-05-28 16:53:26.074076 | orchestrator | skipping: [testbed-node-4] 2025-05-28 16:53:26.074938 | orchestrator | skipping: [testbed-node-5] 2025-05-28 16:53:26.075838 | orchestrator | 2025-05-28 16:53:26.077087 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 16:53:26.078310 | orchestrator | 2025-05-28 16:53:26 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-28 16:53:26.079305 | orchestrator | 2025-05-28 16:53:26 | INFO  | Please wait and do not abort execution. 2025-05-28 16:53:26.080929 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-28 16:53:26.082448 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-28 16:53:26.083636 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-28 16:53:26.084433 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-28 16:53:26.084868 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-28 16:53:26.085632 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-28 16:53:26.086519 | orchestrator | 2025-05-28 16:53:26.086878 | orchestrator | 2025-05-28 16:53:26.087658 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 16:53:26.088321 | orchestrator | Wednesday 28 May 2025 16:53:26 +0000 (0:00:00.231) 0:00:12.888 ********* 2025-05-28 16:53:26.089042 | orchestrator | =============================================================================== 2025-05-28 16:53:26.090993 | orchestrator | Gathering Facts --------------------------------------------------------- 3.37s 2025-05-28 16:53:26.091571 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.48s 2025-05-28 16:53:26.092252 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.22s 2025-05-28 16:53:26.093022 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.21s 2025-05-28 16:53:26.093901 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.83s 2025-05-28 16:53:26.094921 | orchestrator | Do not require tty for all users ---------------------------------------- 0.76s 2025-05-28 16:53:26.095661 | orchestrator | 
osism.commons.operator : Set ssh authorized keys ------------------------ 0.73s 2025-05-28 16:53:26.096510 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.70s 2025-05-28 16:53:26.097124 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.62s 2025-05-28 16:53:26.097666 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.62s 2025-05-28 16:53:26.098128 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s 2025-05-28 16:53:26.098757 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.18s 2025-05-28 16:53:26.099417 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s 2025-05-28 16:53:26.100046 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s 2025-05-28 16:53:26.100746 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s 2025-05-28 16:53:26.102141 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.15s 2025-05-28 16:53:26.102335 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s 2025-05-28 16:53:26.555928 | orchestrator | + osism apply --environment custom facts 2025-05-28 16:53:28.235928 | orchestrator | 2025-05-28 16:53:28 | INFO  | Trying to run play facts in environment custom 2025-05-28 16:53:28.240255 | orchestrator | Registering Redlock._acquired_script 2025-05-28 16:53:28.240355 | orchestrator | Registering Redlock._extend_script 2025-05-28 16:53:28.240367 | orchestrator | Registering Redlock._release_script 2025-05-28 16:53:28.299433 | orchestrator | 2025-05-28 16:53:28 | INFO  | Task e808caa4-4f12-4b8e-b6a5-354ac250d102 (facts) was prepared for execution. 2025-05-28 16:53:28.299555 | orchestrator | 2025-05-28 16:53:28 | INFO  | It takes a moment until task e808caa4-4f12-4b8e-b6a5-354ac250d102 (facts) has been started and output is visible here. 
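
The operator play that just completed provisions the "operator" service user on every testbed node. A rough shell equivalent of the tasks it logged, as a sketch under stated assumptions: the group memberships (adm, sudo), the locale exports, and the .ssh directory come from the task output, while the sudoers content and the exact flags are guesses, since the role's module arguments are not visible in the log:

  # Approximate shell equivalent of the osism.commons.operator tasks above.
  groupadd operator                                    # Create operator group
  useradd --create-home --gid operator --shell /bin/bash operator  # Create user
  usermod --append --groups adm,sudo operator          # Add user to additional groups
  # Copy user sudoers file -- passwordless sudo is an assumption:
  echo 'operator ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/operator
  chmod 0440 /etc/sudoers.d/operator
  # Set language variables in .bashrc configuration file:
  printf 'export %s=C.UTF-8\n' LANGUAGE LANG LC_ALL >> /home/operator/.bashrc
  # Create .ssh directory; the role then writes the SSH authorized keys into it:
  install -d -m 0700 -o operator -g operator /home/operator/.ssh
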
2025-05-28 16:53:32.159769 | orchestrator | 2025-05-28 16:53:32.160907 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-05-28 16:53:32.163824 | orchestrator | 2025-05-28 16:53:32.163868 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-28 16:53:32.163882 | orchestrator | Wednesday 28 May 2025 16:53:32 +0000 (0:00:00.089) 0:00:00.089 ********* 2025-05-28 16:53:33.558623 | orchestrator | ok: [testbed-manager] 2025-05-28 16:53:33.558818 | orchestrator | changed: [testbed-node-3] 2025-05-28 16:53:33.559217 | orchestrator | changed: [testbed-node-1] 2025-05-28 16:53:33.560302 | orchestrator | changed: [testbed-node-2] 2025-05-28 16:53:33.560631 | orchestrator | changed: [testbed-node-0] 2025-05-28 16:53:33.561112 | orchestrator | changed: [testbed-node-5] 2025-05-28 16:53:33.561578 | orchestrator | changed: [testbed-node-4] 2025-05-28 16:53:33.561798 | orchestrator | 2025-05-28 16:53:33.562443 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-05-28 16:53:33.562761 | orchestrator | Wednesday 28 May 2025 16:53:33 +0000 (0:00:01.396) 0:00:01.485 ********* 2025-05-28 16:53:34.758889 | orchestrator | ok: [testbed-manager] 2025-05-28 16:53:34.759091 | orchestrator | changed: [testbed-node-2] 2025-05-28 16:53:34.759603 | orchestrator | changed: [testbed-node-1] 2025-05-28 16:53:34.760663 | orchestrator | changed: [testbed-node-0] 2025-05-28 16:53:34.762109 | orchestrator | changed: [testbed-node-3] 2025-05-28 16:53:34.763118 | orchestrator | changed: [testbed-node-4] 2025-05-28 16:53:34.763149 | orchestrator | changed: [testbed-node-5] 2025-05-28 16:53:34.763479 | orchestrator | 2025-05-28 16:53:34.764287 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-05-28 16:53:34.764789 | orchestrator | 2025-05-28 16:53:34.765462 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-28 16:53:34.765794 | orchestrator | Wednesday 28 May 2025 16:53:34 +0000 (0:00:01.203) 0:00:02.689 ********* 2025-05-28 16:53:34.826164 | orchestrator | ok: [testbed-node-3] 2025-05-28 16:53:34.875496 | orchestrator | ok: [testbed-node-4] 2025-05-28 16:53:34.875922 | orchestrator | ok: [testbed-node-5] 2025-05-28 16:53:34.876541 | orchestrator | 2025-05-28 16:53:34.877840 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-28 16:53:34.878293 | orchestrator | Wednesday 28 May 2025 16:53:34 +0000 (0:00:00.120) 0:00:02.809 ********* 2025-05-28 16:53:35.087975 | orchestrator | ok: [testbed-node-3] 2025-05-28 16:53:35.088832 | orchestrator | ok: [testbed-node-4] 2025-05-28 16:53:35.089634 | orchestrator | ok: [testbed-node-5] 2025-05-28 16:53:35.090330 | orchestrator | 2025-05-28 16:53:35.091132 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-28 16:53:35.091926 | orchestrator | Wednesday 28 May 2025 16:53:35 +0000 (0:00:00.210) 0:00:03.019 ********* 2025-05-28 16:53:35.317347 | orchestrator | ok: [testbed-node-3] 2025-05-28 16:53:35.317506 | orchestrator | ok: [testbed-node-4] 2025-05-28 16:53:35.317523 | orchestrator | ok: [testbed-node-5] 2025-05-28 16:53:35.317651 | orchestrator | 2025-05-28 16:53:35.317675 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-28 16:53:35.319306 | orchestrator | Wednesday 
28 May 2025 16:53:35 +0000 (0:00:00.229) 0:00:03.249 ********* 2025-05-28 16:53:35.452799 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 16:53:35.453035 | orchestrator | 2025-05-28 16:53:35.453816 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-28 16:53:35.456704 | orchestrator | Wednesday 28 May 2025 16:53:35 +0000 (0:00:00.135) 0:00:03.385 ********* 2025-05-28 16:53:35.900236 | orchestrator | ok: [testbed-node-3] 2025-05-28 16:53:35.900779 | orchestrator | ok: [testbed-node-4] 2025-05-28 16:53:35.901788 | orchestrator | ok: [testbed-node-5] 2025-05-28 16:53:35.902763 | orchestrator | 2025-05-28 16:53:35.903522 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-28 16:53:35.904154 | orchestrator | Wednesday 28 May 2025 16:53:35 +0000 (0:00:00.446) 0:00:03.831 ********* 2025-05-28 16:53:36.015162 | orchestrator | skipping: [testbed-node-3] 2025-05-28 16:53:36.016515 | orchestrator | skipping: [testbed-node-4] 2025-05-28 16:53:36.017804 | orchestrator | skipping: [testbed-node-5] 2025-05-28 16:53:36.019021 | orchestrator | 2025-05-28 16:53:36.020019 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-28 16:53:36.021298 | orchestrator | Wednesday 28 May 2025 16:53:36 +0000 (0:00:00.116) 0:00:03.947 ********* 2025-05-28 16:53:37.068093 | orchestrator | changed: [testbed-node-3] 2025-05-28 16:53:37.069013 | orchestrator | changed: [testbed-node-4] 2025-05-28 16:53:37.069850 | orchestrator | changed: [testbed-node-5] 2025-05-28 16:53:37.070670 | orchestrator | 2025-05-28 16:53:37.071649 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-28 16:53:37.072570 | orchestrator | Wednesday 28 May 2025 16:53:37 +0000 (0:00:01.051) 0:00:04.999 ********* 2025-05-28 16:53:37.545853 | orchestrator | ok: [testbed-node-3] 2025-05-28 16:53:37.547029 | orchestrator | ok: [testbed-node-4] 2025-05-28 16:53:37.547564 | orchestrator | ok: [testbed-node-5] 2025-05-28 16:53:37.548763 | orchestrator | 2025-05-28 16:53:37.549400 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-28 16:53:37.549947 | orchestrator | Wednesday 28 May 2025 16:53:37 +0000 (0:00:00.478) 0:00:05.477 ********* 2025-05-28 16:53:38.624915 | orchestrator | changed: [testbed-node-3] 2025-05-28 16:53:38.625118 | orchestrator | changed: [testbed-node-4] 2025-05-28 16:53:38.625853 | orchestrator | changed: [testbed-node-5] 2025-05-28 16:53:38.630382 | orchestrator | 2025-05-28 16:53:38.631571 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-28 16:53:38.635822 | orchestrator | Wednesday 28 May 2025 16:53:38 +0000 (0:00:01.078) 0:00:06.556 ********* 2025-05-28 16:53:53.163630 | orchestrator | changed: [testbed-node-3] 2025-05-28 16:53:53.163833 | orchestrator | changed: [testbed-node-4] 2025-05-28 16:53:53.165038 | orchestrator | changed: [testbed-node-5] 2025-05-28 16:53:53.165644 | orchestrator | 2025-05-28 16:53:53.167418 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-05-28 16:53:53.168025 | orchestrator | Wednesday 28 May 2025 16:53:53 +0000 (0:00:14.537) 0:00:21.094 ********* 2025-05-28 16:53:53.224979 | orchestrator | 
skipping: [testbed-node-3] 2025-05-28 16:53:53.263445 | orchestrator | skipping: [testbed-node-4] 2025-05-28 16:53:53.264464 | orchestrator | skipping: [testbed-node-5] 2025-05-28 16:53:53.265772 | orchestrator | 2025-05-28 16:53:53.265797 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-05-28 16:53:53.266710 | orchestrator | Wednesday 28 May 2025 16:53:53 +0000 (0:00:00.100) 0:00:21.195 ********* 2025-05-28 16:54:01.492416 | orchestrator | changed: [testbed-node-3] 2025-05-28 16:54:01.492671 | orchestrator | changed: [testbed-node-4] 2025-05-28 16:54:01.494494 | orchestrator | changed: [testbed-node-5] 2025-05-28 16:54:01.495348 | orchestrator | 2025-05-28 16:54:01.496521 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-28 16:54:01.497313 | orchestrator | Wednesday 28 May 2025 16:54:01 +0000 (0:00:08.227) 0:00:29.423 ********* 2025-05-28 16:54:01.998465 | orchestrator | ok: [testbed-node-4] 2025-05-28 16:54:02.000740 | orchestrator | ok: [testbed-node-5] 2025-05-28 16:54:02.000819 | orchestrator | ok: [testbed-node-3] 2025-05-28 16:54:02.002315 | orchestrator | 2025-05-28 16:54:02.002640 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-05-28 16:54:02.003579 | orchestrator | Wednesday 28 May 2025 16:54:01 +0000 (0:00:00.507) 0:00:29.930 ********* 2025-05-28 16:54:05.544694 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-05-28 16:54:05.550793 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-05-28 16:54:05.550910 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-05-28 16:54:05.551601 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-05-28 16:54:05.552589 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-05-28 16:54:05.553117 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-05-28 16:54:05.553891 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-05-28 16:54:05.554598 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-05-28 16:54:05.555340 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-05-28 16:54:05.555805 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-05-28 16:54:05.556415 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-05-28 16:54:05.556866 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-05-28 16:54:05.557392 | orchestrator | 2025-05-28 16:54:05.557869 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-28 16:54:05.558578 | orchestrator | Wednesday 28 May 2025 16:54:05 +0000 (0:00:03.540) 0:00:33.471 ********* 2025-05-28 16:54:06.945200 | orchestrator | ok: [testbed-node-3] 2025-05-28 16:54:06.948078 | orchestrator | ok: [testbed-node-4] 2025-05-28 16:54:06.948114 | orchestrator | ok: [testbed-node-5] 2025-05-28 16:54:06.948450 | orchestrator | 2025-05-28 16:54:06.950594 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-28 16:54:06.954305 | orchestrator | 2025-05-28 16:54:06.954351 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-28 16:54:06.954364 | orchestrator | 
Wednesday 28 May 2025 16:54:06 +0000 (0:00:01.403) 0:00:34.874 ********* 2025-05-28 16:54:10.807463 | orchestrator | ok: [testbed-node-0] 2025-05-28 16:54:10.807931 | orchestrator | ok: [testbed-node-1] 2025-05-28 16:54:10.809088 | orchestrator | ok: [testbed-node-2] 2025-05-28 16:54:10.809575 | orchestrator | ok: [testbed-manager] 2025-05-28 16:54:10.810667 | orchestrator | ok: [testbed-node-5] 2025-05-28 16:54:10.811751 | orchestrator | ok: [testbed-node-3] 2025-05-28 16:54:10.812492 | orchestrator | ok: [testbed-node-4] 2025-05-28 16:54:10.813475 | orchestrator | 2025-05-28 16:54:10.814186 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 16:54:10.814674 | orchestrator | 2025-05-28 16:54:10 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-28 16:54:10.815028 | orchestrator | 2025-05-28 16:54:10 | INFO  | Please wait and do not abort execution. 2025-05-28 16:54:10.816668 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 16:54:10.818166 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 16:54:10.818339 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 16:54:10.819704 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 16:54:10.821427 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 16:54:10.821578 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 16:54:10.822130 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 16:54:10.822956 | orchestrator | 2025-05-28 16:54:10.823424 | orchestrator | 2025-05-28 16:54:10.824232 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 16:54:10.824598 | orchestrator | Wednesday 28 May 2025 16:54:10 +0000 (0:00:03.864) 0:00:38.739 ********* 2025-05-28 16:54:10.825507 | orchestrator | =============================================================================== 2025-05-28 16:54:10.827050 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.54s 2025-05-28 16:54:10.827568 | orchestrator | Install required packages (Debian) -------------------------------------- 8.23s 2025-05-28 16:54:10.828387 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.86s 2025-05-28 16:54:10.828744 | orchestrator | Copy fact files --------------------------------------------------------- 3.54s 2025-05-28 16:54:10.829234 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.40s 2025-05-28 16:54:10.829826 | orchestrator | Create custom facts directory ------------------------------------------- 1.40s 2025-05-28 16:54:10.830326 | orchestrator | Copy fact file ---------------------------------------------------------- 1.20s 2025-05-28 16:54:10.830989 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.08s 2025-05-28 16:54:10.831340 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.05s 2025-05-28 16:54:10.831887 | orchestrator | Create custom facts directory 
------------------------------------------- 0.51s 2025-05-28 16:54:10.832814 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.48s 2025-05-28 16:54:10.832980 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s 2025-05-28 16:54:10.833477 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.23s 2025-05-28 16:54:10.833726 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s 2025-05-28 16:54:10.834109 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s 2025-05-28 16:54:10.834753 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s 2025-05-28 16:54:10.835058 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s 2025-05-28 16:54:10.835596 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s 2025-05-28 16:54:11.343125 | orchestrator | + osism apply bootstrap 2025-05-28 16:54:12.995269 | orchestrator | Registering Redlock._acquired_script 2025-05-28 16:54:12.995460 | orchestrator | Registering Redlock._extend_script 2025-05-28 16:54:12.995475 | orchestrator | Registering Redlock._release_script 2025-05-28 16:54:13.060778 | orchestrator | 2025-05-28 16:54:13 | INFO  | Task 58490e6d-576b-4ef6-bf3b-998b73a21eb3 (bootstrap) was prepared for execution. 2025-05-28 16:54:13.060885 | orchestrator | 2025-05-28 16:54:13 | INFO  | It takes a moment until task 58490e6d-576b-4ef6-bf3b-998b73a21eb3 (bootstrap) has been started and output is visible here. 2025-05-28 16:54:17.205707 | orchestrator | 2025-05-28 16:54:17.205859 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-05-28 16:54:17.207616 | orchestrator | 2025-05-28 16:54:17.207823 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-05-28 16:54:17.208896 | orchestrator | Wednesday 28 May 2025 16:54:17 +0000 (0:00:00.167) 0:00:00.167 ********* 2025-05-28 16:54:17.279403 | orchestrator | ok: [testbed-manager] 2025-05-28 16:54:17.306091 | orchestrator | ok: [testbed-node-3] 2025-05-28 16:54:17.339882 | orchestrator | ok: [testbed-node-4] 2025-05-28 16:54:17.371986 | orchestrator | ok: [testbed-node-5] 2025-05-28 16:54:17.460473 | orchestrator | ok: [testbed-node-0] 2025-05-28 16:54:17.462844 | orchestrator | ok: [testbed-node-1] 2025-05-28 16:54:17.468063 | orchestrator | ok: [testbed-node-2] 2025-05-28 16:54:17.468109 | orchestrator | 2025-05-28 16:54:17.468148 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-28 16:54:17.468161 | orchestrator | 2025-05-28 16:54:17.468675 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-28 16:54:17.469558 | orchestrator | Wednesday 28 May 2025 16:54:17 +0000 (0:00:00.258) 0:00:00.426 ********* 2025-05-28 16:54:21.178449 | orchestrator | ok: [testbed-node-2] 2025-05-28 16:54:21.178952 | orchestrator | ok: [testbed-node-1] 2025-05-28 16:54:21.180129 | orchestrator | ok: [testbed-node-0] 2025-05-28 16:54:21.180931 | orchestrator | ok: [testbed-manager] 2025-05-28 16:54:21.182322 | orchestrator | ok: [testbed-node-3] 2025-05-28 16:54:21.183719 | orchestrator | ok: [testbed-node-4] 2025-05-28 16:54:21.183761 | orchestrator | ok: [testbed-node-5] 2025-05-28 
2025-05-28 16:54:11.343125 | orchestrator | + osism apply bootstrap
2025-05-28 16:54:12.995269 | orchestrator | Registering Redlock._acquired_script
2025-05-28 16:54:12.995460 | orchestrator | Registering Redlock._extend_script
2025-05-28 16:54:12.995475 | orchestrator | Registering Redlock._release_script
2025-05-28 16:54:13.060778 | orchestrator | 2025-05-28 16:54:13 | INFO  | Task 58490e6d-576b-4ef6-bf3b-998b73a21eb3 (bootstrap) was prepared for execution.
2025-05-28 16:54:13.060885 | orchestrator | 2025-05-28 16:54:13 | INFO  | It takes a moment until task 58490e6d-576b-4ef6-bf3b-998b73a21eb3 (bootstrap) has been started and output is visible here.
2025-05-28 16:54:17.205707 | orchestrator |
2025-05-28 16:54:17.205859 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-05-28 16:54:17.207616 | orchestrator |
2025-05-28 16:54:17.207823 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-05-28 16:54:17.208896 | orchestrator | Wednesday 28 May 2025 16:54:17 +0000 (0:00:00.167) 0:00:00.167 *********
2025-05-28 16:54:17.279403 | orchestrator | ok: [testbed-manager]
2025-05-28 16:54:17.306091 | orchestrator | ok: [testbed-node-3]
2025-05-28 16:54:17.339882 | orchestrator | ok: [testbed-node-4]
2025-05-28 16:54:17.371986 | orchestrator | ok: [testbed-node-5]
2025-05-28 16:54:17.460473 | orchestrator | ok: [testbed-node-0]
2025-05-28 16:54:17.462844 | orchestrator | ok: [testbed-node-1]
2025-05-28 16:54:17.468063 | orchestrator | ok: [testbed-node-2]
2025-05-28 16:54:17.468109 | orchestrator |
2025-05-28 16:54:17.468148 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-28 16:54:17.468161 | orchestrator |
2025-05-28 16:54:17.468675 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-28 16:54:17.469558 | orchestrator | Wednesday 28 May 2025 16:54:17 +0000 (0:00:00.258) 0:00:00.426 *********
2025-05-28 16:54:21.178449 | orchestrator | ok: [testbed-node-2]
2025-05-28 16:54:21.178952 | orchestrator | ok: [testbed-node-1]
2025-05-28 16:54:21.180129 | orchestrator | ok: [testbed-node-0]
2025-05-28 16:54:21.180931 | orchestrator | ok: [testbed-manager]
2025-05-28 16:54:21.182322 | orchestrator | ok: [testbed-node-3]
2025-05-28 16:54:21.183719 | orchestrator | ok: [testbed-node-4]
2025-05-28 16:54:21.183761 | orchestrator | ok: [testbed-node-5]
2025-05-28 16:54:21.184101 | orchestrator |
2025-05-28 16:54:21.184959 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-05-28 16:54:21.185561 | orchestrator |
2025-05-28 16:54:21.186312 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-28 16:54:21.186785 | orchestrator | Wednesday 28 May 2025 16:54:21 +0000 (0:00:03.717) 0:00:04.144 *********
2025-05-28 16:54:21.271596 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-05-28 16:54:21.272473 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-05-28 16:54:21.272668 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-05-28 16:54:21.307691 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-28 16:54:21.307755 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-05-28 16:54:21.307859 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-28 16:54:21.309857 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-05-28 16:54:21.356109 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-28 16:54:21.356178 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-05-28 16:54:21.359354 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-05-28 16:54:21.359469 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-05-28 16:54:21.360198 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-28 16:54:21.360383 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-05-28 16:54:21.361205 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-28 16:54:21.606226 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-05-28 16:54:21.606466 | orchestrator | skipping: [testbed-manager]
2025-05-28 16:54:21.606490 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-05-28 16:54:21.608391 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-28 16:54:21.608764 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-28 16:54:21.609901 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-28 16:54:21.610271 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-28 16:54:21.611323 | orchestrator | skipping: [testbed-node-3]
2025-05-28 16:54:21.611719 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-28 16:54:21.612305 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-05-28 16:54:21.612821 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-28 16:54:21.613961 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-28 16:54:21.614096 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-28 16:54:21.614730 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-05-28 16:54:21.615064 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-28 16:54:21.616631 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-05-28 16:54:21.616669 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-05-28 16:54:21.616740 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-28 16:54:21.617107 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-28 16:54:21.617607 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-28 16:54:21.617972 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-05-28 16:54:21.618300 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-28 16:54:21.618726 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-05-28 16:54:21.618953 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-05-28 16:54:21.619254 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-28 16:54:21.619672 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-05-28 16:54:21.620134 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-28 16:54:21.620470 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-28 16:54:21.620975 | orchestrator | skipping: [testbed-node-0]
2025-05-28 16:54:21.621302 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-05-28 16:54:21.621840 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-28 16:54:21.622110 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-28 16:54:21.622769 | orchestrator | skipping: [testbed-node-4]
2025-05-28 16:54:21.622960 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-05-28 16:54:21.623825 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-05-28 16:54:21.623914 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-28 16:54:21.624149 | orchestrator | skipping: [testbed-node-5]
2025-05-28 16:54:21.624635 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-05-28 16:54:21.624883 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-05-28 16:54:21.625377 | orchestrator | skipping: [testbed-node-1]
2025-05-28 16:54:21.625607 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-05-28 16:54:21.625977 | orchestrator | skipping: [testbed-node-2]
2025-05-28 16:54:21.626322 | orchestrator |
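All fifty-odd skipping lines above come from one looped task: the play exists only so that runs restricted with --limit still get facts for hosts outside the limit, and on a full run every (host, item) pair is skipped. A rough sketch of that fallback pattern, with the loop and the guard condition as assumptions (the actual task lives in the OSISM playbooks and its condition is not visible here):

  - name: Gathers facts about hosts (sketch of the --limit fallback)
    ansible.builtin.setup:
    delegate_to: "{{ item }}"
    delegate_facts: true
    loop: "{{ groups['all'] }}"
    when: hostvars[item].ansible_facts | default({}) | length == 0  # assumed guard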
2025-05-28 16:54:21.626637 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-05-28 16:54:21.626994 | orchestrator |
2025-05-28 16:54:21.627902 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-05-28 16:54:21.627927 | orchestrator | Wednesday 28 May 2025 16:54:21 +0000 (0:00:00.426) 0:00:04.571 *********
2025-05-28 16:54:22.917518 | orchestrator | ok: [testbed-node-2]
2025-05-28 16:54:22.917643 | orchestrator | ok: [testbed-manager]
2025-05-28 16:54:22.918431 | orchestrator | ok: [testbed-node-5]
2025-05-28 16:54:22.919499 | orchestrator | ok: [testbed-node-1]
2025-05-28 16:54:22.921647 | orchestrator | ok: [testbed-node-0]
2025-05-28 16:54:22.921892 | orchestrator | ok: [testbed-node-4]
2025-05-28 16:54:22.923589 | orchestrator | ok: [testbed-node-3]
2025-05-28 16:54:22.924473 | orchestrator |
2025-05-28 16:54:22.925684 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-05-28 16:54:22.926265 | orchestrator | Wednesday 28 May 2025 16:54:22 +0000 (0:00:01.310) 0:00:05.881 *********
2025-05-28 16:54:24.149380 | orchestrator | ok: [testbed-manager]
2025-05-28 16:54:24.151000 | orchestrator | ok: [testbed-node-1]
2025-05-28 16:54:24.152179 | orchestrator | ok: [testbed-node-3]
2025-05-28 16:54:24.153616 | orchestrator | ok: [testbed-node-4]
2025-05-28 16:54:24.154736 | orchestrator | ok: [testbed-node-5]
2025-05-28 16:54:24.156669 | orchestrator | ok: [testbed-node-0]
2025-05-28 16:54:24.157748 | orchestrator | ok: [testbed-node-2]
2025-05-28 16:54:24.159242 | orchestrator |
2025-05-28 16:54:24.160404 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-05-28 16:54:24.161400 | orchestrator | Wednesday 28 May 2025 16:54:24 +0000 (0:00:01.231) 0:00:07.113 *********
2025-05-28 16:54:24.421976 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 16:54:24.422154 | orchestrator |
2025-05-28 16:54:24.425983 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-05-28 16:54:24.426078 | orchestrator | Wednesday 28 May 2025 16:54:24 +0000 (0:00:00.273) 0:00:07.387 *********
2025-05-28 16:54:26.450951 | orchestrator | changed: [testbed-manager]
2025-05-28 16:54:26.452491 | orchestrator | changed: [testbed-node-4]
2025-05-28 16:54:26.456038 | orchestrator | changed: [testbed-node-5]
2025-05-28 16:54:26.457881 | orchestrator | changed: [testbed-node-1]
2025-05-28 16:54:26.457908 | orchestrator | changed: [testbed-node-3]
2025-05-28 16:54:26.457920 | orchestrator | changed: [testbed-node-2]
2025-05-28 16:54:26.458485 | orchestrator | changed: [testbed-node-0]
2025-05-28 16:54:26.458832 | orchestrator |
2025-05-28 16:54:26.459639 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-05-28 16:54:26.459986 | orchestrator | Wednesday 28 May 2025 16:54:26 +0000 (0:00:02.026) 0:00:09.413 *********
2025-05-28 16:54:26.511892 | orchestrator | skipping: [testbed-manager]
2025-05-28 16:54:26.726164 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 16:54:26.726788 | orchestrator |
2025-05-28 16:54:26.730953 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2025-05-28 16:54:26.731010 | orchestrator | Wednesday 28 May 2025 16:54:26 +0000 (0:00:00.277) 0:00:09.691 *********
2025-05-28 16:54:27.722844 | orchestrator | changed: [testbed-node-3]
2025-05-28 16:54:27.722960 | orchestrator | changed: [testbed-node-5]
2025-05-28 16:54:27.723037 | orchestrator | changed: [testbed-node-4]
2025-05-28 16:54:27.723534 | orchestrator | changed: [testbed-node-2]
2025-05-28 16:54:27.725521 | orchestrator | changed: [testbed-node-0]
2025-05-28 16:54:27.726611 | orchestrator | changed: [testbed-node-1]
2025-05-28 16:54:27.727365 | orchestrator |
2025-05-28 16:54:27.728451 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2025-05-28 16:54:27.729261 | orchestrator | Wednesday 28 May 2025 16:54:27 +0000 (0:00:00.996) 0:00:10.687 *********
2025-05-28 16:54:27.805380 | orchestrator | skipping: [testbed-manager]
2025-05-28 16:54:28.340336 | orchestrator | changed: [testbed-node-2]
2025-05-28 16:54:28.340710 | orchestrator | changed: [testbed-node-1]
2025-05-28 16:54:28.343600 | orchestrator | changed: [testbed-node-4]
2025-05-28 16:54:28.344495 | orchestrator | changed: [testbed-node-3]
2025-05-28 16:54:28.345095 | orchestrator | changed: [testbed-node-5]
2025-05-28 16:54:28.346356 | orchestrator | changed: [testbed-node-0]
2025-05-28 16:54:28.347013 | orchestrator |
2025-05-28 16:54:28.348456 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2025-05-28 16:54:28.349740 | orchestrator | Wednesday 28 May 2025 16:54:28 +0000 (0:00:00.617) 0:00:11.305 *********
2025-05-28 16:54:28.439943 | orchestrator | skipping: [testbed-node-3]
2025-05-28 16:54:28.479704 | orchestrator | skipping: [testbed-node-4]
2025-05-28 16:54:28.515440 | orchestrator | skipping: [testbed-node-5]
2025-05-28 16:54:28.776389 | orchestrator | skipping: [testbed-node-0]
2025-05-28 16:54:28.777209 | orchestrator | skipping: [testbed-node-1]
2025-05-28 16:54:28.780738 | orchestrator | skipping: [testbed-node-2]
2025-05-28 16:54:28.780769 | orchestrator | ok: [testbed-manager]
2025-05-28 16:54:28.780782 | orchestrator |
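The proxy role only touches the nodes (testbed-manager is skipped) and lays down proxy settings for apt plus /etc/environment. A sketch of the apt half, with the drop-in name and proxy URL purely illustrative:

  - name: Configure proxy parameters for apt (sketch; path and URL are assumptions)
    ansible.builtin.copy:
      dest: /etc/apt/apt.conf.d/90proxy
      content: |
        Acquire::http::Proxy "http://proxy.example.com:3128";
        Acquire::https::Proxy "http://proxy.example.com:3128";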
2025-05-28 16:54:28.781882 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-05-28 16:54:28.783074 | orchestrator | Wednesday 28 May 2025 16:54:28 +0000 (0:00:00.436) 0:00:11.742 *********
2025-05-28 16:54:28.847526 | orchestrator | skipping: [testbed-manager]
2025-05-28 16:54:28.872925 | orchestrator | skipping: [testbed-node-3]
2025-05-28 16:54:28.892826 | orchestrator | skipping: [testbed-node-4]
2025-05-28 16:54:28.916749 | orchestrator | skipping: [testbed-node-5]
2025-05-28 16:54:28.974470 | orchestrator | skipping: [testbed-node-0]
2025-05-28 16:54:28.976888 | orchestrator | skipping: [testbed-node-1]
2025-05-28 16:54:28.976913 | orchestrator | skipping: [testbed-node-2]
2025-05-28 16:54:28.976925 | orchestrator |
2025-05-28 16:54:28.977836 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-05-28 16:54:28.978593 | orchestrator | Wednesday 28 May 2025 16:54:28 +0000 (0:00:00.195) 0:00:11.937 *********
2025-05-28 16:54:29.259040 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 16:54:29.263735 | orchestrator |
2025-05-28 16:54:29.263872 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-05-28 16:54:29.263893 | orchestrator | Wednesday 28 May 2025 16:54:29 +0000 (0:00:00.286) 0:00:12.224 *********
2025-05-28 16:54:29.572237 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 16:54:29.574004 | orchestrator |
2025-05-28 16:54:29.574569 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-05-28 16:54:29.575350 | orchestrator | Wednesday 28 May 2025 16:54:29 +0000 (0:00:00.311) 0:00:12.536 *********
2025-05-28 16:54:31.161791 | orchestrator | ok: [testbed-manager]
2025-05-28 16:54:31.161971 | orchestrator | ok: [testbed-node-3]
2025-05-28 16:54:31.162641 | orchestrator | ok: [testbed-node-1]
2025-05-28 16:54:31.163710 | orchestrator | ok: [testbed-node-2]
2025-05-28 16:54:31.164632 | orchestrator | ok: [testbed-node-0]
2025-05-28 16:54:31.165400 | orchestrator | ok: [testbed-node-5]
2025-05-28 16:54:31.165856 | orchestrator | ok: [testbed-node-4]
2025-05-28 16:54:31.166881 | orchestrator |
2025-05-28 16:54:31.168005 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-05-28 16:54:31.168796 | orchestrator | Wednesday 28 May 2025 16:54:31 +0000 (0:00:01.589) 0:00:14.125 *********
2025-05-28 16:54:31.241084 | orchestrator | skipping: [testbed-manager]
2025-05-28 16:54:31.267205 | orchestrator | skipping: [testbed-node-3]
2025-05-28 16:54:31.301132 | orchestrator | skipping: [testbed-node-4]
2025-05-28 16:54:31.318473 | orchestrator | skipping: [testbed-node-5]
2025-05-28 16:54:31.382580 | orchestrator | skipping: [testbed-node-0]
2025-05-28 16:54:31.382734 | orchestrator | skipping: [testbed-node-1]
2025-05-28 16:54:31.383542 | orchestrator | skipping: [testbed-node-2]
2025-05-28 16:54:31.384537 | orchestrator |
2025-05-28 16:54:31.385128 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-05-28 16:54:31.385679 | orchestrator | Wednesday 28 May 2025 16:54:31 +0000 (0:00:00.222) 0:00:14.348 *********
2025-05-28 16:54:31.922224 | orchestrator | ok: [testbed-manager]
2025-05-28 16:54:31.923074 | orchestrator | ok: [testbed-node-3]
2025-05-28 16:54:31.923741 | orchestrator | ok: [testbed-node-0]
2025-05-28 16:54:31.925339 | orchestrator | ok: [testbed-node-4]
2025-05-28 16:54:31.925999 | orchestrator | ok: [testbed-node-5]
2025-05-28 16:54:31.927579 | orchestrator | ok: [testbed-node-1]
2025-05-28 16:54:31.927769 | orchestrator | ok: [testbed-node-2]
2025-05-28 16:54:31.928962 | orchestrator |
2025-05-28 16:54:31.929533 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-05-28 16:54:31.930278 | orchestrator | Wednesday 28 May 2025 16:54:31 +0000 (0:00:00.536) 0:00:14.884 *********
2025-05-28 16:54:32.024766 | orchestrator | skipping: [testbed-manager]
2025-05-28 16:54:32.056954 | orchestrator | skipping: [testbed-node-3]
2025-05-28 16:54:32.085154 | orchestrator | skipping: [testbed-node-4]
2025-05-28 16:54:32.117843 | orchestrator | skipping: [testbed-node-5]
2025-05-28 16:54:32.196453 | orchestrator | skipping: [testbed-node-0]
2025-05-28 16:54:32.200755 | orchestrator | skipping: [testbed-node-1]
2025-05-28 16:54:32.200794 | orchestrator | skipping: [testbed-node-2]
2025-05-28 16:54:32.200816 | orchestrator |
2025-05-28 16:54:32.201483 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-05-28 16:54:32.202200 | orchestrator | Wednesday 28 May 2025 16:54:32 +0000 (0:00:00.277) 0:00:15.162 *********
2025-05-28 16:54:32.758879 | orchestrator | ok: [testbed-manager]
2025-05-28 16:54:32.760205 | orchestrator | changed: [testbed-node-3]
2025-05-28 16:54:32.760870 | orchestrator | changed: [testbed-node-4]
2025-05-28 16:54:32.761548 | orchestrator | changed: [testbed-node-0]
2025-05-28 16:54:32.763019 | orchestrator | changed: [testbed-node-2]
2025-05-28 16:54:32.763652 | orchestrator | changed: [testbed-node-5]
2025-05-28 16:54:32.764413 | orchestrator | changed: [testbed-node-1]
2025-05-28 16:54:32.765002 | orchestrator |
2025-05-28 16:54:32.766562 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-05-28 16:54:32.769784 | orchestrator | Wednesday 28 May 2025 16:54:32 +0000 (0:00:00.561) 0:00:15.723 *********
2025-05-28 16:54:34.098561 | orchestrator | ok: [testbed-manager]
2025-05-28 16:54:34.098745 | orchestrator | changed: [testbed-node-3]
2025-05-28 16:54:34.103427 | orchestrator | changed: [testbed-node-4]
2025-05-28 16:54:34.103707 | orchestrator | changed: [testbed-node-1]
2025-05-28 16:54:34.105278 | orchestrator | changed: [testbed-node-0]
2025-05-28 16:54:34.105769 | orchestrator | changed: [testbed-node-5]
2025-05-28 16:54:34.106745 | orchestrator | changed: [testbed-node-2]
2025-05-28 16:54:34.107602 | orchestrator |
2025-05-28 16:54:34.108617 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-05-28 16:54:34.109188 | orchestrator | Wednesday 28 May 2025 16:54:34 +0000 (0:00:01.339) 0:00:17.062 *********
2025-05-28 16:54:35.246275 | orchestrator | ok: [testbed-node-1]
2025-05-28 16:54:35.247857 | orchestrator | ok: [testbed-node-4]
2025-05-28 16:54:35.248722 | orchestrator | ok: [testbed-node-2]
2025-05-28 16:54:35.250357 | orchestrator | ok: [testbed-manager]
2025-05-28 16:54:35.251732 | orchestrator | ok: [testbed-node-5]
2025-05-28 16:54:35.252481 | orchestrator | ok: [testbed-node-3]
2025-05-28 16:54:35.253319 | orchestrator | ok: [testbed-node-0]
2025-05-28 16:54:35.254073 | orchestrator |
2025-05-28 16:54:35.254955 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-05-28 16:54:35.255812 | orchestrator | Wednesday 28 May 2025 16:54:35 +0000 (0:00:01.147) 0:00:18.210 *********
2025-05-28 16:54:35.607974 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 16:54:35.609409 | orchestrator |
2025-05-28 16:54:35.611367 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-05-28 16:54:35.612506 | orchestrator | Wednesday 28 May 2025 16:54:35 +0000 (0:00:00.357) 0:00:18.568 *********
2025-05-28 16:54:35.682141 | orchestrator | skipping: [testbed-manager]
2025-05-28 16:54:36.861763 | orchestrator | changed: [testbed-node-2]
2025-05-28 16:54:36.863600 | orchestrator | changed: [testbed-node-1]
2025-05-28 16:54:36.865013 | orchestrator | changed: [testbed-node-0]
2025-05-28 16:54:36.866117 | orchestrator | changed: [testbed-node-5]
2025-05-28 16:54:36.867622 | orchestrator | changed: [testbed-node-4]
2025-05-28 16:54:36.868967 | orchestrator | changed: [testbed-node-3]
2025-05-28 16:54:36.869908 | orchestrator |
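In the resolvconf block above, "ok" on testbed-manager versus "changed" on the nodes for the stub-resolv.conf task shows the symlink already existed on the manager. The task name spells out the operation; a matching sketch:

  - name: Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf (sketch)
    ansible.builtin.file:
      src: /run/systemd/resolve/stub-resolv.conf
      dest: /etc/resolv.conf
      state: link
      force: true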
2025-05-28 16:54:36.871005 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-05-28 16:54:36.871791 | orchestrator | Wednesday 28 May 2025 16:54:36 +0000 (0:00:01.258) 0:00:19.826 *********
2025-05-28 16:54:36.936469 | orchestrator | ok: [testbed-manager]
2025-05-28 16:54:36.966588 | orchestrator | ok: [testbed-node-3]
2025-05-28 16:54:36.994601 | orchestrator | ok: [testbed-node-4]
2025-05-28 16:54:37.020170 | orchestrator | ok: [testbed-node-5]
2025-05-28 16:54:37.077634 | orchestrator | ok: [testbed-node-0]
2025-05-28 16:54:37.077920 | orchestrator | ok: [testbed-node-1]
2025-05-28 16:54:37.078276 | orchestrator | ok: [testbed-node-2]
2025-05-28 16:54:37.081776 | orchestrator |
2025-05-28 16:54:37.081817 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-05-28 16:54:37.082752 | orchestrator | Wednesday 28 May 2025 16:54:37 +0000 (0:00:00.217) 0:00:20.044 *********
2025-05-28 16:54:37.162833 | orchestrator | ok: [testbed-manager]
2025-05-28 16:54:37.181082 | orchestrator | ok: [testbed-node-3]
2025-05-28 16:54:37.240509 | orchestrator | ok: [testbed-node-4]
2025-05-28 16:54:37.324429 | orchestrator | ok: [testbed-node-5]
2025-05-28 16:54:37.324545 | orchestrator | ok: [testbed-node-0]
2025-05-28 16:54:37.324642 | orchestrator | ok: [testbed-node-1]
2025-05-28 16:54:37.325089 | orchestrator | ok: [testbed-node-2]
2025-05-28 16:54:37.325772 | orchestrator |
2025-05-28 16:54:37.326101 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-05-28 16:54:37.326756 | orchestrator | Wednesday 28 May 2025 16:54:37 +0000 (0:00:00.246) 0:00:20.290 *********
2025-05-28 16:54:37.447689 | orchestrator | ok: [testbed-manager]
2025-05-28 16:54:37.470489 | orchestrator | ok: [testbed-node-3]
2025-05-28 16:54:37.498218 | orchestrator | ok: [testbed-node-4]
2025-05-28 16:54:37.519039 | orchestrator | ok: [testbed-node-5]
2025-05-28 16:54:37.585417 | orchestrator | ok: [testbed-node-0]
2025-05-28 16:54:37.586644 | orchestrator | ok: [testbed-node-1]
2025-05-28 16:54:37.588445 | orchestrator | ok: [testbed-node-2]
2025-05-28 16:54:37.590012 | orchestrator |
2025-05-28 16:54:37.591193 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-05-28 16:54:37.591895 | orchestrator | Wednesday 28 May 2025 16:54:37 +0000 (0:00:00.259) 0:00:20.549 *********
2025-05-28 16:54:37.850505 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 16:54:37.851455 | orchestrator |
2025-05-28 16:54:37.852469 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-05-28 16:54:37.853910 | orchestrator | Wednesday 28 May 2025 16:54:37 +0000 (0:00:00.265) 0:00:20.814 *********
2025-05-28 16:54:38.369856 | orchestrator | ok: [testbed-manager]
2025-05-28 16:54:38.370905 | orchestrator | ok: [testbed-node-3]
2025-05-28 16:54:38.372245 | orchestrator | ok: [testbed-node-4]
2025-05-28 16:54:38.374095 | orchestrator | ok: [testbed-node-5]
2025-05-28 16:54:38.374813 | orchestrator | ok: [testbed-node-0]
2025-05-28 16:54:38.375996 | orchestrator | ok: [testbed-node-2]
2025-05-28 16:54:38.376791 | orchestrator | ok: [testbed-node-1]
2025-05-28 16:54:38.377672 | orchestrator |
2025-05-28 16:54:38.378463 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-05-28 16:54:38.379133 | orchestrator | Wednesday 28 May 2025 16:54:38 +0000 (0:00:00.519) 0:00:21.334 *********
2025-05-28 16:54:38.466908 | orchestrator | skipping: [testbed-manager]
2025-05-28 16:54:38.492202 | orchestrator | skipping: [testbed-node-3]
2025-05-28 16:54:38.517457 | orchestrator | skipping: [testbed-node-4]
2025-05-28 16:54:38.597363 | orchestrator | skipping: [testbed-node-5]
2025-05-28 16:54:38.598270 | orchestrator | skipping: [testbed-node-0]
2025-05-28 16:54:38.599073 | orchestrator | skipping: [testbed-node-1]
2025-05-28 16:54:38.600020 | orchestrator | skipping: [testbed-node-2]
2025-05-28 16:54:38.600941 | orchestrator |
2025-05-28 16:54:38.601535 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-05-28 16:54:38.602600 | orchestrator | Wednesday 28 May 2025 16:54:38 +0000 (0:00:00.227) 0:00:21.562 *********
2025-05-28 16:54:39.635047 | orchestrator | ok: [testbed-manager]
2025-05-28 16:54:39.636762 | orchestrator | ok: [testbed-node-3]
2025-05-28 16:54:39.637467 | orchestrator | ok: [testbed-node-4]
2025-05-28 16:54:39.638992 | orchestrator | ok: [testbed-node-5]
2025-05-28 16:54:39.639241 | orchestrator | changed: [testbed-node-2]
2025-05-28 16:54:39.641405 | orchestrator | changed: [testbed-node-0]
2025-05-28 16:54:39.641845 | orchestrator | changed: [testbed-node-1]
2025-05-28 16:54:39.642981 | orchestrator |
2025-05-28 16:54:39.643422 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-05-28 16:54:39.644372 | orchestrator | Wednesday 28 May 2025 16:54:39 +0000 (0:00:01.036) 0:00:22.599 *********
2025-05-28 16:54:40.173637 | orchestrator | ok: [testbed-manager]
2025-05-28 16:54:40.174570 | orchestrator | ok: [testbed-node-3]
2025-05-28 16:54:40.175730 | orchestrator | ok: [testbed-node-4]
2025-05-28 16:54:40.177017 | orchestrator | ok: [testbed-node-5]
2025-05-28 16:54:40.177986 | orchestrator | ok: [testbed-node-1]
2025-05-28 16:54:40.178754 | orchestrator | ok: [testbed-node-2]
2025-05-28 16:54:40.179185 | orchestrator | ok: [testbed-node-0]
2025-05-28 16:54:40.179963 | orchestrator |
2025-05-28 16:54:40.180522 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-05-28 16:54:40.180770 | orchestrator | Wednesday 28 May 2025 16:54:40 +0000 (0:00:00.539) 0:00:23.139 *********
2025-05-28 16:54:41.283927 | orchestrator | ok: [testbed-manager]
2025-05-28 16:54:41.284056 | orchestrator | ok: [testbed-node-3]
2025-05-28 16:54:41.284788 | orchestrator | ok: [testbed-node-4]
2025-05-28 16:54:41.285372 | orchestrator | ok: [testbed-node-5]
2025-05-28 16:54:41.286417 | orchestrator | changed: [testbed-node-1]
2025-05-28 16:54:41.286997 | orchestrator | changed: [testbed-node-2]
2025-05-28 16:54:41.287469 | orchestrator | changed: [testbed-node-0]
2025-05-28 16:54:41.288043 | orchestrator |
2025-05-28 16:54:41.288561 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-05-28 16:54:41.289504 | orchestrator | Wednesday 28 May 2025 16:54:41 +0000 (0:00:01.108) 0:00:24.247 *********
2025-05-28 16:54:55.668471 | orchestrator | ok: [testbed-node-3]
2025-05-28 16:54:55.668613 | orchestrator | ok: [testbed-node-5]
2025-05-28 16:54:55.668628 | orchestrator | ok: [testbed-node-4]
2025-05-28 16:54:55.668640 | orchestrator | changed: [testbed-manager]
2025-05-28 16:54:55.669612 | orchestrator | changed: [testbed-node-1]
2025-05-28 16:54:55.670459 | orchestrator | changed: [testbed-node-0]
2025-05-28 16:54:55.671679 | orchestrator | changed: [testbed-node-2]
2025-05-28 16:54:55.672499 | orchestrator |
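On Ubuntu 24.04 the classic /etc/apt/sources.list gives way to a deb822-style ubuntu.sources file, which is exactly the remove-then-copy sequence the repository role just ran. A sketch of such a file as the role might deliver it, with mirror URIs and suites as assumptions for a noble host:

  - name: Copy ubuntu.sources file (sketch; URIs and suites are assumptions)
    ansible.builtin.copy:
      dest: /etc/apt/sources.list.d/ubuntu.sources
      content: |
        Types: deb
        URIs: http://archive.ubuntu.com/ubuntu
        Suites: noble noble-updates noble-backports noble-security
        Components: main restricted universe multiverse
        Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg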
2025-05-28 16:54:55.672871 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2025-05-28 16:54:55.673873 | orchestrator | Wednesday 28 May 2025 16:54:55 +0000 (0:00:14.380) 0:00:38.628 *********
2025-05-28 16:54:55.747067 | orchestrator | ok: [testbed-manager]
2025-05-28 16:54:55.775936 | orchestrator | ok: [testbed-node-3]
2025-05-28 16:54:55.813458 | orchestrator | ok: [testbed-node-4]
2025-05-28 16:54:55.834518 | orchestrator | ok: [testbed-node-5]
2025-05-28 16:54:55.902146 | orchestrator | ok: [testbed-node-0]
2025-05-28 16:54:55.905649 | orchestrator | ok: [testbed-node-1]
2025-05-28 16:54:55.905698 | orchestrator | ok: [testbed-node-2]
2025-05-28 16:54:55.906669 | orchestrator |
2025-05-28 16:54:55.907814 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2025-05-28 16:54:55.909199 | orchestrator | Wednesday 28 May 2025 16:54:55 +0000 (0:00:00.239) 0:00:38.867 *********
2025-05-28 16:54:55.978005 | orchestrator | ok: [testbed-manager]
2025-05-28 16:54:56.005246 | orchestrator | ok: [testbed-node-3]
2025-05-28 16:54:56.043622 | orchestrator | ok: [testbed-node-4]
2025-05-28 16:54:56.067399 | orchestrator | ok: [testbed-node-5]
2025-05-28 16:54:56.148186 | orchestrator | ok: [testbed-node-0]
2025-05-28 16:54:56.149459 | orchestrator | ok: [testbed-node-1]
2025-05-28 16:54:56.150638 | orchestrator | ok: [testbed-node-2]
2025-05-28 16:54:56.151508 | orchestrator |
2025-05-28 16:54:56.152394 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2025-05-28 16:54:56.152782 | orchestrator | Wednesday 28 May 2025 16:54:56 +0000 (0:00:00.245) 0:00:39.113 *********
2025-05-28 16:54:56.228905 | orchestrator | ok: [testbed-manager]
2025-05-28 16:54:56.255609 | orchestrator | ok: [testbed-node-3]
2025-05-28 16:54:56.281419 | orchestrator | ok: [testbed-node-4]
2025-05-28 16:54:56.305183 | orchestrator | ok: [testbed-node-5]
2025-05-28 16:54:56.379054 | orchestrator | ok: [testbed-node-0]
2025-05-28 16:54:56.379243 | orchestrator | ok: [testbed-node-1]
2025-05-28 16:54:56.379424 | orchestrator | ok: [testbed-node-2]
2025-05-28 16:54:56.380096 | orchestrator |
2025-05-28 16:54:56.383196 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2025-05-28 16:54:56.384245 | orchestrator | Wednesday 28 May 2025 16:54:56 +0000 (0:00:00.232) 0:00:39.345 *********
2025-05-28 16:54:56.653521 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 16:54:56.653640 | orchestrator |
2025-05-28 16:54:56.653656 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2025-05-28 16:54:56.653671 | orchestrator | Wednesday 28 May 2025 16:54:56 +0000 (0:00:00.271) 0:00:39.616 *********
2025-05-28 16:54:58.442467 | orchestrator | ok: [testbed-manager]
2025-05-28 16:54:58.442595 | orchestrator | ok: [testbed-node-2]
2025-05-28 16:54:58.443411 | orchestrator | ok: [testbed-node-0]
2025-05-28 16:54:58.446103 | orchestrator | ok: [testbed-node-3]
2025-05-28 16:54:58.448825 | orchestrator | ok: [testbed-node-5]
2025-05-28 16:54:58.448847 | orchestrator | ok: [testbed-node-4]
2025-05-28 16:54:58.448858 | orchestrator | ok: [testbed-node-1]
2025-05-28 16:54:58.448870 | orchestrator |
2025-05-28 16:54:58.449778 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2025-05-28 16:54:58.452174 | orchestrator | Wednesday 28 May 2025 16:54:58 +0000 (0:00:01.782) 0:00:41.399 *********
2025-05-28 16:54:59.590314 | orchestrator | changed: [testbed-manager]
2025-05-28 16:54:59.591667 | orchestrator | changed: [testbed-node-3]
2025-05-28 16:54:59.594197 | orchestrator | changed: [testbed-node-4]
2025-05-28 16:54:59.595004 | orchestrator | changed: [testbed-node-0]
2025-05-28 16:54:59.596077 | orchestrator | changed: [testbed-node-5]
2025-05-28 16:54:59.596937 | orchestrator | changed: [testbed-node-1]
2025-05-28 16:54:59.597666 | orchestrator | changed: [testbed-node-2]
2025-05-28 16:54:59.598434 | orchestrator |
2025-05-28 16:54:59.599603 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2025-05-28 16:54:59.600398 | orchestrator | Wednesday 28 May 2025 16:54:59 +0000 (0:00:01.155) 0:00:42.554 *********
2025-05-28 16:55:00.387965 | orchestrator | ok: [testbed-manager]
2025-05-28 16:55:00.388596 | orchestrator | ok: [testbed-node-3]
2025-05-28 16:55:00.389229 | orchestrator | ok: [testbed-node-4]
2025-05-28 16:55:00.389874 | orchestrator | ok: [testbed-node-5]
2025-05-28 16:55:00.390699 | orchestrator | ok: [testbed-node-0]
2025-05-28 16:55:00.391529 | orchestrator | ok: [testbed-node-2]
2025-05-28 16:55:00.392001 | orchestrator | ok: [testbed-node-1]
2025-05-28 16:55:00.392894 | orchestrator |
2025-05-28 16:55:00.393598 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2025-05-28 16:55:00.393968 | orchestrator | Wednesday 28 May 2025 16:55:00 +0000 (0:00:00.796) 0:00:43.350 *********
2025-05-28 16:55:00.702969 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 16:55:00.705962 | orchestrator |
2025-05-28 16:55:00.706004 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2025-05-28 16:55:00.706104 | orchestrator | Wednesday 28 May 2025 16:55:00 +0000 (0:00:00.316) 0:00:43.666 *********
2025-05-28 16:55:01.740975 | orchestrator | changed: [testbed-manager]
2025-05-28 16:55:01.741133 | orchestrator | changed: [testbed-node-3]
2025-05-28 16:55:01.741159 | orchestrator | changed: [testbed-node-4]
2025-05-28 16:55:01.741718 | orchestrator | changed: [testbed-node-5]
2025-05-28 16:55:01.742604 | orchestrator | changed: [testbed-node-2]
2025-05-28 16:55:01.743357 | orchestrator | changed: [testbed-node-0]
2025-05-28 16:55:01.744447 | orchestrator | changed: [testbed-node-1]
2025-05-28 16:55:01.744867 | orchestrator |
2025-05-28 16:55:01.746803 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2025-05-28 16:55:01.746986 | orchestrator | Wednesday 28 May 2025 16:55:01 +0000 (0:00:01.034) 0:00:44.701 *********
2025-05-28 16:55:01.843218 | orchestrator | skipping: [testbed-manager]
2025-05-28 16:55:01.879325 | orchestrator | skipping: [testbed-node-3]
2025-05-28 16:55:01.909166 | orchestrator | skipping: [testbed-node-4]
2025-05-28 16:55:02.056607 | orchestrator | skipping: [testbed-node-5]
2025-05-28 16:55:02.057742 | orchestrator | skipping: [testbed-node-0]
2025-05-28 16:55:02.059002 | orchestrator | skipping: [testbed-node-1]
2025-05-28 16:55:02.060366 | orchestrator | skipping: [testbed-node-2]
2025-05-28 16:55:02.065153 | orchestrator |
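Forwarding syslog to a local fluentd daemon is typically a one-line rsyslog drop-in using the omfwd output module. A sketch under that assumption (file name, port, and protocol are not shown in this log and are illustrative only):

  - name: Forward syslog message to local fluentd daemon (sketch)
    ansible.builtin.copy:
      dest: /etc/rsyslog.d/10-fluentd.conf
      content: |
        *.* action(type="omfwd" target="127.0.0.1" port="5140" protocol="udp")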
2025-05-28 16:55:02.065223 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2025-05-28 16:55:02.065239 | orchestrator | Wednesday 28 May 2025 16:55:02 +0000 (0:00:00.320) 0:00:45.021 *********
2025-05-28 16:55:13.699369 | orchestrator | changed: [testbed-node-3]
2025-05-28 16:55:13.699527 | orchestrator | changed: [testbed-node-0]
2025-05-28 16:55:13.699841 | orchestrator | changed: [testbed-node-4]
2025-05-28 16:55:13.700647 | orchestrator | changed: [testbed-node-2]
2025-05-28 16:55:13.701335 | orchestrator | changed: [testbed-node-1]
2025-05-28 16:55:13.702310 | orchestrator | changed: [testbed-node-5]
2025-05-28 16:55:13.703319 | orchestrator | changed: [testbed-manager]
2025-05-28 16:55:13.703644 | orchestrator |
2025-05-28 16:55:13.704557 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2025-05-28 16:55:13.705811 | orchestrator | Wednesday 28 May 2025 16:55:13 +0000 (0:00:11.641) 0:00:56.663 *********
2025-05-28 16:55:14.666807 | orchestrator | ok: [testbed-node-1]
2025-05-28 16:55:14.667909 | orchestrator | ok: [testbed-node-0]
2025-05-28 16:55:14.670477 | orchestrator | ok: [testbed-node-2]
2025-05-28 16:55:14.672023 | orchestrator | ok: [testbed-node-5]
2025-05-28 16:55:14.673654 | orchestrator | ok: [testbed-manager]
2025-05-28 16:55:14.674739 | orchestrator | ok: [testbed-node-4]
2025-05-28 16:55:14.675504 | orchestrator | ok: [testbed-node-3]
2025-05-28 16:55:14.676795 | orchestrator |
2025-05-28 16:55:14.676816 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-05-28 16:55:14.677100 | orchestrator | Wednesday 28 May 2025 16:55:14 +0000 (0:00:00.966) 0:00:57.629 *********
2025-05-28 16:55:15.606782 | orchestrator | ok: [testbed-node-3]
2025-05-28 16:55:15.607683 | orchestrator | ok: [testbed-manager]
2025-05-28 16:55:15.608538 | orchestrator | ok: [testbed-node-4]
2025-05-28 16:55:15.609694 | orchestrator | ok: [testbed-node-5]
2025-05-28 16:55:15.610717 | orchestrator | ok: [testbed-node-0]
2025-05-28 16:55:15.612024 | orchestrator | ok: [testbed-node-1]
2025-05-28 16:55:15.612917 | orchestrator | ok: [testbed-node-2]
2025-05-28 16:55:15.613754 | orchestrator |
2025-05-28 16:55:15.614615 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2025-05-28 16:55:15.615391 | orchestrator | Wednesday 28 May 2025 16:55:15 +0000 (0:00:00.940) 0:00:58.569 *********
2025-05-28 16:55:15.689039 | orchestrator | ok: [testbed-manager]
2025-05-28 16:55:15.735158 | orchestrator | ok: [testbed-node-3]
2025-05-28 16:55:15.771771 | orchestrator | ok: [testbed-node-4]
2025-05-28 16:55:15.803184 | orchestrator | ok: [testbed-node-5]
2025-05-28 16:55:15.865380 | orchestrator | ok: [testbed-node-0]
2025-05-28 16:55:15.870789 | orchestrator | ok: [testbed-node-1]
2025-05-28 16:55:15.874701 | orchestrator | ok: [testbed-node-2]
2025-05-28 16:55:15.876537 | orchestrator |
2025-05-28 16:55:15.879463 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-05-28 16:55:15.879486 | orchestrator | Wednesday 28 May 2025 16:55:15 +0000 (0:00:00.261) 0:00:58.831 *********
2025-05-28 16:55:15.993036 | orchestrator | ok: [testbed-manager]
2025-05-28 16:55:16.026138 | orchestrator | ok: [testbed-node-3]
2025-05-28 16:55:16.068561 | orchestrator | ok: [testbed-node-4]
2025-05-28 16:55:16.092202 | orchestrator | ok: [testbed-node-5]
2025-05-28 16:55:16.171241 | orchestrator | ok: [testbed-node-0]
2025-05-28 16:55:16.172441 | orchestrator | ok: [testbed-node-1]
2025-05-28 16:55:16.173134 | orchestrator | ok: [testbed-node-2]
2025-05-28 16:55:16.174057 | orchestrator |
2025-05-28 16:55:16.174908 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-05-28 16:55:16.175764 | orchestrator | Wednesday 28 May 2025 16:55:16 +0000 (0:00:00.302) 0:00:59.134 *********
2025-05-28 16:55:16.511835 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 16:55:16.511935 | orchestrator |
2025-05-28 16:55:16.512002 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-05-28 16:55:16.512426 | orchestrator | Wednesday 28 May 2025 16:55:16 +0000 (0:00:00.343) 0:00:59.478 *********
2025-05-28 16:55:18.162170 | orchestrator | ok: [testbed-manager]
2025-05-28 16:55:18.162992 | orchestrator | ok: [testbed-node-3]
2025-05-28 16:55:18.163019 | orchestrator | ok: [testbed-node-1]
2025-05-28 16:55:18.165994 | orchestrator | ok: [testbed-node-0]
2025-05-28 16:55:18.170574 | orchestrator | ok: [testbed-node-5]
2025-05-28 16:55:18.176026 | orchestrator | ok: [testbed-node-2]
2025-05-28 16:55:18.177125 | orchestrator | ok: [testbed-node-4]
2025-05-28 16:55:18.178562 | orchestrator |
2025-05-28 16:55:18.180252 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-05-28 16:55:18.182594 | orchestrator | Wednesday 28 May 2025 16:55:18 +0000 (0:00:01.645) 0:01:01.124 *********
2025-05-28 16:55:18.748056 | orchestrator | changed: [testbed-manager]
2025-05-28 16:55:18.749404 | orchestrator | changed: [testbed-node-0]
2025-05-28 16:55:18.750110 | orchestrator | changed: [testbed-node-4]
2025-05-28 16:55:18.750842 | orchestrator | changed: [testbed-node-1]
2025-05-28 16:55:18.753653 | orchestrator | changed: [testbed-node-5]
2025-05-28 16:55:18.753936 | orchestrator | changed: [testbed-node-3]
2025-05-28 16:55:18.755967 | orchestrator | changed: [testbed-node-2]
2025-05-28 16:55:18.758415 | orchestrator |
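"Set needrestart mode" most plausibly switches needrestart from its interactive prompt to automatic service restarts, so the upgrade tasks that follow cannot hang on a TTY dialog. A sketch under that assumption (the drop-in path and value are not visible in this log):

  - name: Set needrestart mode (sketch; path and mode are assumptions)
    ansible.builtin.copy:
      dest: /etc/needrestart/conf.d/zz-osism.conf
      content: |
        # 'a' = restart services automatically instead of prompting
        $nrconf{restart} = 'a';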
2025-05-28 16:55:18.759879 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-05-28 16:55:18.760941 | orchestrator | Wednesday 28 May 2025 16:55:18 +0000 (0:00:00.588) 0:01:01.712 *********
2025-05-28 16:55:18.825792 | orchestrator | ok: [testbed-manager]
2025-05-28 16:55:18.854940 | orchestrator | ok: [testbed-node-3]
2025-05-28 16:55:18.889972 | orchestrator | ok: [testbed-node-4]
2025-05-28 16:55:18.918243 | orchestrator | ok: [testbed-node-5]
2025-05-28 16:55:18.996037 | orchestrator | ok: [testbed-node-0]
2025-05-28 16:55:18.996996 | orchestrator | ok: [testbed-node-1]
2025-05-28 16:55:18.997624 | orchestrator | ok: [testbed-node-2]
2025-05-28 16:55:18.998966 | orchestrator |
2025-05-28 16:55:18.999000 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-05-28 16:55:19.001897 | orchestrator | Wednesday 28 May 2025 16:55:18 +0000 (0:00:00.249) 0:01:01.961 *********
2025-05-28 16:55:20.216154 | orchestrator | ok: [testbed-manager]
2025-05-28 16:55:20.217367 | orchestrator | ok: [testbed-node-3]
2025-05-28 16:55:20.218588 | orchestrator | ok: [testbed-node-0]
2025-05-28 16:55:20.219646 | orchestrator | ok: [testbed-node-5]
2025-05-28 16:55:20.220267 | orchestrator | ok: [testbed-node-4]
2025-05-28 16:55:20.221251 | orchestrator | ok: [testbed-node-1]
2025-05-28 16:55:20.221791 | orchestrator | ok: [testbed-node-2]
2025-05-28 16:55:20.222737 | orchestrator |
2025-05-28 16:55:20.224192 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-05-28 16:55:20.224820 | orchestrator | Wednesday 28 May 2025 16:55:20 +0000 (0:00:01.217) 0:01:03.179 *********
2025-05-28 16:55:22.002794 | orchestrator | changed: [testbed-manager]
2025-05-28 16:55:22.002994 | orchestrator | changed: [testbed-node-3]
2025-05-28 16:55:22.003060 | orchestrator | changed: [testbed-node-0]
2025-05-28 16:55:22.004869 | orchestrator | changed: [testbed-node-1]
2025-05-28 16:55:22.006536 | orchestrator | changed: [testbed-node-4]
2025-05-28 16:55:22.007134 | orchestrator | changed: [testbed-node-2]
2025-05-28 16:55:22.008458 | orchestrator | changed: [testbed-node-5]
2025-05-28 16:55:22.010778 | orchestrator |
2025-05-28 16:55:22.011645 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-05-28 16:55:22.013482 | orchestrator | Wednesday 28 May 2025 16:55:21 +0000 (0:00:01.787) 0:01:04.967 *********
2025-05-28 16:55:24.487155 | orchestrator | ok: [testbed-manager]
2025-05-28 16:55:24.487617 | orchestrator | ok: [testbed-node-1]
2025-05-28 16:55:24.488083 | orchestrator | ok: [testbed-node-0]
2025-05-28 16:55:24.489857 | orchestrator | ok: [testbed-node-3]
2025-05-28 16:55:24.491228 | orchestrator | ok: [testbed-node-2]
2025-05-28 16:55:24.492662 | orchestrator | ok: [testbed-node-4]
2025-05-28 16:55:24.494529 | orchestrator | ok: [testbed-node-5]
2025-05-28 16:55:24.495154 | orchestrator |
2025-05-28 16:55:24.495904 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-05-28 16:55:24.497361 | orchestrator | Wednesday 28 May 2025 16:55:24 +0000 (0:00:02.480) 0:01:07.447 *********
2025-05-28 16:56:02.286279 | orchestrator | ok: [testbed-manager]
2025-05-28 16:56:02.287138 | orchestrator | ok: [testbed-node-2]
2025-05-28 16:56:02.288523 | orchestrator | ok: [testbed-node-0]
2025-05-28 16:56:02.291821 | orchestrator | ok: [testbed-node-5]
2025-05-28 16:56:02.292891 | orchestrator | ok: [testbed-node-4]
2025-05-28 16:56:02.294167 | orchestrator | ok: [testbed-node-1]
2025-05-28 16:56:02.294735 | orchestrator | ok: [testbed-node-3]
2025-05-28 16:56:02.295668 | orchestrator |
2025-05-28 16:56:02.296367 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-05-28 16:56:02.297040 | orchestrator | Wednesday 28 May 2025 16:56:02 +0000 (0:00:37.800) 0:01:45.248 *********
2025-05-28 16:57:19.798656 | orchestrator | changed: [testbed-manager]
2025-05-28 16:57:19.798791 | orchestrator | changed: [testbed-node-0]
2025-05-28 16:57:19.799898 | orchestrator | changed: [testbed-node-3]
2025-05-28 16:57:19.800950 | orchestrator | changed: [testbed-node-1]
2025-05-28 16:57:19.801976 | orchestrator | changed: [testbed-node-5]
2025-05-28 16:57:19.804718 | orchestrator | changed: [testbed-node-2]
2025-05-28 16:57:19.804740 | orchestrator | changed: [testbed-node-4]
2025-05-28 16:57:19.804751 | orchestrator |
2025-05-28 16:57:19.804805 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-05-28 16:57:19.805885 | orchestrator | Wednesday 28 May 2025 16:57:19 +0000 (0:01:17.511) 0:03:02.760 *********
2025-05-28 16:57:21.445111 | orchestrator | ok: [testbed-manager]
2025-05-28 16:57:21.445398 | orchestrator | ok: [testbed-node-3]
2025-05-28 16:57:21.447125 | orchestrator | ok: [testbed-node-2]
2025-05-28 16:57:21.447520 | orchestrator | ok: [testbed-node-5]
2025-05-28 16:57:21.448333 | orchestrator | ok: [testbed-node-1]
2025-05-28 16:57:21.448979 | orchestrator | ok: [testbed-node-4]
2025-05-28 16:57:21.449460 | orchestrator | ok: [testbed-node-0]
2025-05-28 16:57:21.450491 | orchestrator |
2025-05-28 16:57:21.450795 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-05-28 16:57:21.451675 | orchestrator | Wednesday 28 May 2025 16:57:21 +0000 (0:00:01.649) 0:03:04.409 *********
2025-05-28 16:57:33.689916 | orchestrator | ok: [testbed-node-3]
2025-05-28 16:57:33.690246 | orchestrator | ok: [testbed-node-2]
2025-05-28 16:57:33.690321 | orchestrator | ok: [testbed-node-0]
2025-05-28 16:57:33.690412 | orchestrator | ok: [testbed-node-1]
2025-05-28 16:57:33.690922 | orchestrator | ok: [testbed-node-5]
2025-05-28 16:57:33.691971 | orchestrator | ok: [testbed-node-4]
2025-05-28 16:57:33.692697 | orchestrator | changed: [testbed-manager]
2025-05-28 16:57:33.694152 | orchestrator |
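The packages role walks the full apt lifecycle just seen: refresh the cache, pre-download and apply upgrades, install the required package set (the 37.8 s download and 77.5 s install dominate this play), then clean up. The same sequence expressed against the apt module, as a sketch (the cache_valid_time value is an assumed default):

  - name: Update package cache (sketch)
    ansible.builtin.apt:
      update_cache: true
      cache_valid_time: 3600

  - name: Upgrade packages (sketch)
    ansible.builtin.apt:
      upgrade: dist

  - name: Remove useless packages from the cache (sketch)
    ansible.builtin.apt:
      autoclean: true

  - name: Remove dependencies that are no longer required (sketch)
    ansible.builtin.apt:
      autoremove: true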
2025-05-28 16:57:33.694459 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-05-28 16:57:33.694856 | orchestrator | Wednesday 28 May 2025 16:57:33 +0000 (0:00:12.238) 0:03:16.648 *********
2025-05-28 16:57:34.084216 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-05-28 16:57:34.084574 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-05-28 16:57:34.089073 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-05-28 16:57:34.089225 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-05-28 16:57:34.089243 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-05-28 16:57:34.089279 | orchestrator |
2025-05-28 16:57:34.089292 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-05-28 16:57:34.089364 | orchestrator | Wednesday 28 May 2025 16:57:34 +0000 (0:00:00.401) 0:03:17.049 *********
2025-05-28 16:57:34.143058 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-28 16:57:34.177056 | orchestrator | skipping: [testbed-manager]
2025-05-28 16:57:34.177225 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-28 16:57:34.178135 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-28 16:57:34.205650 | orchestrator | skipping: [testbed-node-3]
2025-05-28 16:57:34.238288 | orchestrator | skipping: [testbed-node-4]
2025-05-28 16:57:34.238735 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-28 16:57:34.268416 | orchestrator | skipping: [testbed-node-5]
2025-05-28 16:57:35.829424 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-28 16:57:35.830431 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-28 16:57:35.831493 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-28 16:57:35.832317 | orchestrator |
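Each included group (elasticsearch, rabbitmq, generic, compute, k3s_node) is just a list of name/value pairs applied where that group's condition holds, which is why the manager and the control nodes skip the elasticsearch items while testbed-node-0/1/2 change them. The loop plausibly reduces to ansible.posix.sysctl; the variable and file names here are assumptions:

  - name: Set sysctl parameters (sketch of the per-group loop)
    ansible.posix.sysctl:
      name: "{{ item.name }}"
      value: "{{ item.value }}"
      sysctl_file: /etc/sysctl.d/99-osism.conf
      state: present
      reload: true
    loop: "{{ sysctl_parameters }}"  # e.g. [{'name': 'vm.max_map_count', 'value': 262144}]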
2025-05-28 16:57:35.833147 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-05-28 16:57:35.834063 | orchestrator | Wednesday 28 May 2025 16:57:35 +0000 (0:00:01.743) 0:03:18.793 *********
2025-05-28 16:57:35.888489 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-28 16:57:35.890136 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-28 16:57:35.891289 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-28 16:57:35.891764 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-28 16:57:35.928494 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-28 16:57:35.929882 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-28 16:57:35.930490 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-28 16:57:35.930518 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-28 16:57:35.930954 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-28 16:57:35.931422 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-28 16:57:35.933664 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-28 16:57:35.933696 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-28 16:57:35.933708 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-28 16:57:35.933752 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-28 16:57:35.986884 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-28 16:57:35.988121 | orchestrator | skipping: [testbed-manager]
2025-05-28 16:57:35.988227 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-28 16:57:35.989485 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-28 16:57:35.989979 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-28 16:57:35.991184 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-28 16:57:35.991637 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-28 16:57:35.992658 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-28 16:57:35.994140 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-28 16:57:35.994903 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-28 16:57:35.995019 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-28 16:57:35.996855 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-28 16:57:35.996984 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-28 16:57:35.998174 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-28 16:57:35.999219 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-28 16:57:35.999868 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-28 16:57:36.000837 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-28 16:57:36.001810 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-28 16:57:36.002957 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-28 16:57:36.003458 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-28 16:57:36.003871 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-28 16:57:36.004467 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-28 16:57:36.005478 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-28 16:57:36.029430 | orchestrator | skipping: [testbed-node-3]
2025-05-28 16:57:36.029809 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-28 16:57:36.031313 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-28 16:57:36.031331 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-28 16:57:36.031336 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-28 16:57:36.054653 | orchestrator | skipping: [testbed-node-4]
2025-05-28 16:57:42.630726 | orchestrator | skipping: [testbed-node-5]
2025-05-28 16:57:42.631597 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-28 16:57:42.631637 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-28 16:57:42.632202 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-28 16:57:42.633869 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-28 16:57:42.634282 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-28 16:57:42.634735 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-28 16:57:42.635023 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-28 16:57:42.635732 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-28 16:57:42.636088 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-28 16:57:42.636891 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-28 16:57:42.637363 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-28 16:57:42.638215 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-28 16:57:42.638917 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-28 16:57:42.639051 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-28 16:57:42.639833 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-28 16:57:42.640295 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-28 16:57:42.640765 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-28 16:57:42.641221 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-28 16:57:42.641766 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-28 16:57:42.642092 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-28 16:57:42.642736 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-28 16:57:42.643197 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-28 16:57:42.643447 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-28 16:57:42.644084 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-28 16:57:42.646543 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-28 16:57:42.646599 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-28 16:57:42.646973 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-28 16:57:42.649384 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-28 16:57:42.649414 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-28 16:57:42.649426 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-28 16:57:42.649438 | orchestrator |
2025-05-28 16:57:42.649450 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-05-28 16:57:42.649461 | orchestrator | Wednesday 28 May 2025 16:57:42 +0000 (0:00:06.797) 0:03:25.591 *********
2025-05-28 16:57:43.281404 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-28 16:57:43.282559 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-28 16:57:43.285773 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-28 16:57:43.285808 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-28 16:57:43.285855 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-28 16:57:43.286405 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-28 16:57:43.287373 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-28 16:57:43.288119 | orchestrator |
skipping: [testbed-node-0] 2025-05-28 16:57:44.396073 | orchestrator | skipping: [testbed-node-1] 2025-05-28 16:57:44.397705 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-28 16:57:44.398909 | orchestrator | skipping: [testbed-node-2] 2025-05-28 16:57:44.400315 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-28 16:57:44.401457 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-28 16:57:44.402888 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-28 16:57:44.403911 | orchestrator | 2025-05-28 16:57:44.404691 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-05-28 16:57:44.405360 | orchestrator | Wednesday 28 May 2025 16:57:44 +0000 (0:00:00.592) 0:03:27.359 ********* 2025-05-28 16:57:44.492158 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:57:44.511956 | orchestrator | skipping: [testbed-node-3] 2025-05-28 16:57:44.540527 | orchestrator | skipping: [testbed-node-4] 2025-05-28 16:57:44.565784 | orchestrator | skipping: [testbed-node-5] 2025-05-28 16:57:44.684639 | orchestrator | skipping: [testbed-node-0] 2025-05-28 16:57:44.685532 | orchestrator | skipping: [testbed-node-1] 2025-05-28 16:57:44.686637 | orchestrator | skipping: [testbed-node-2] 2025-05-28 16:57:44.688899 | orchestrator | 2025-05-28 16:57:44.690076 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-05-28 16:57:44.691008 | orchestrator | Wednesday 28 May 2025 16:57:44 +0000 (0:00:00.290) 0:03:27.649 ********* 2025-05-28 16:57:50.485493 | orchestrator | ok: [testbed-manager] 2025-05-28 16:57:50.486593 | orchestrator | ok: [testbed-node-0] 2025-05-28 16:57:50.486632 | orchestrator | ok: [testbed-node-2] 2025-05-28 16:57:50.488005 | orchestrator | ok: [testbed-node-3] 2025-05-28 16:57:50.488052 | orchestrator | ok: [testbed-node-5] 2025-05-28 16:57:50.489593 | orchestrator | ok: [testbed-node-1] 2025-05-28 16:57:50.489618 | orchestrator | ok: [testbed-node-4] 2025-05-28 16:57:50.489630 | orchestrator | 2025-05-28 16:57:50.490079 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-05-28 16:57:50.490341 | orchestrator | Wednesday 28 May 2025 16:57:50 +0000 (0:00:05.801) 0:03:33.450 ********* 2025-05-28 16:57:50.533430 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-05-28 16:57:50.567800 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:57:50.618502 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-05-28 16:57:50.619503 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-05-28 16:57:50.652077 | orchestrator | skipping: [testbed-node-3] 2025-05-28 16:57:50.691578 | orchestrator | skipping: [testbed-node-4] 2025-05-28 16:57:50.692805 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-05-28 16:57:50.737354 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-05-28 16:57:50.737980 | orchestrator | skipping: [testbed-node-5] 2025-05-28 16:57:50.739151 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-05-28 16:57:50.798929 | orchestrator | skipping: [testbed-node-0] 2025-05-28 16:57:50.799012 | orchestrator | skipping: [testbed-node-1] 2025-05-28 16:57:50.799905 | orchestrator | skipping: [testbed-node-2] => 
(item=nscd)  2025-05-28 16:57:50.800545 | orchestrator | skipping: [testbed-node-2] 2025-05-28 16:57:50.801927 | orchestrator | 2025-05-28 16:57:50.801955 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-05-28 16:57:50.802420 | orchestrator | Wednesday 28 May 2025 16:57:50 +0000 (0:00:00.313) 0:03:33.764 ********* 2025-05-28 16:57:52.010630 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-05-28 16:57:52.010816 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-05-28 16:57:52.011362 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-05-28 16:57:52.015090 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-05-28 16:57:52.015714 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-05-28 16:57:52.016293 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-05-28 16:57:52.019094 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-05-28 16:57:52.019613 | orchestrator | 2025-05-28 16:57:52.020097 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-05-28 16:57:52.020737 | orchestrator | Wednesday 28 May 2025 16:57:52 +0000 (0:00:01.209) 0:03:34.974 ********* 2025-05-28 16:57:52.477322 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 16:57:52.481119 | orchestrator | 2025-05-28 16:57:52.481161 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-05-28 16:57:52.481176 | orchestrator | Wednesday 28 May 2025 16:57:52 +0000 (0:00:00.467) 0:03:35.441 ********* 2025-05-28 16:57:53.862333 | orchestrator | ok: [testbed-manager] 2025-05-28 16:57:53.862745 | orchestrator | ok: [testbed-node-3] 2025-05-28 16:57:53.863336 | orchestrator | ok: [testbed-node-5] 2025-05-28 16:57:53.863433 | orchestrator | ok: [testbed-node-0] 2025-05-28 16:57:53.864505 | orchestrator | ok: [testbed-node-4] 2025-05-28 16:57:53.867201 | orchestrator | ok: [testbed-node-1] 2025-05-28 16:57:53.867992 | orchestrator | ok: [testbed-node-2] 2025-05-28 16:57:53.868944 | orchestrator | 2025-05-28 16:57:53.869889 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-05-28 16:57:53.870659 | orchestrator | Wednesday 28 May 2025 16:57:53 +0000 (0:00:01.382) 0:03:36.824 ********* 2025-05-28 16:57:54.512137 | orchestrator | ok: [testbed-manager] 2025-05-28 16:57:54.512305 | orchestrator | ok: [testbed-node-3] 2025-05-28 16:57:54.512679 | orchestrator | ok: [testbed-node-4] 2025-05-28 16:57:54.512808 | orchestrator | ok: [testbed-node-5] 2025-05-28 16:57:54.513438 | orchestrator | ok: [testbed-node-0] 2025-05-28 16:57:54.514140 | orchestrator | ok: [testbed-node-1] 2025-05-28 16:57:54.514493 | orchestrator | ok: [testbed-node-2] 2025-05-28 16:57:54.515174 | orchestrator | 2025-05-28 16:57:54.515706 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-05-28 16:57:54.516412 | orchestrator | Wednesday 28 May 2025 16:57:54 +0000 (0:00:00.652) 0:03:37.477 ********* 2025-05-28 16:57:55.235583 | orchestrator | changed: [testbed-manager] 2025-05-28 16:57:55.235813 | orchestrator | changed: [testbed-node-3] 2025-05-28 16:57:55.236825 | orchestrator | changed: [testbed-node-4] 2025-05-28 16:57:55.238212 | orchestrator | changed: [testbed-node-5] 
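
The sysctl, services, and motd tasks above all follow the same pattern: each role loops over a list of settings and applies them only to hosts in the matching group, which is why non-members report `skipping` per item while members report `changed`. A minimal sketch of such a group-scoped loop using ansible.posix.sysctl; the variable name sysctl_compute_parameters is illustrative, not the actual osism.commons.sysctl internals:

- name: Set sysctl parameters on compute
  ansible.posix.sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    state: present
    sysctl_set: true   # also apply to the running kernel, not only the config file
    reload: true
  loop: "{{ sysctl_compute_parameters }}"
  when: "'compute' in group_names"
  vars:
    # Illustrative default matching the value seen in this run
    sysctl_compute_parameters:
      - name: net.netfilter.nf_conntrack_max
        value: 1048576

Because the module is idempotent, a second run against unchanged values reports `ok` instead of `changed`.
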
2025-05-28 16:57:55.239121 | orchestrator | changed: [testbed-node-0] 2025-05-28 16:57:55.240281 | orchestrator | changed: [testbed-node-1] 2025-05-28 16:57:55.240692 | orchestrator | changed: [testbed-node-2] 2025-05-28 16:57:55.241482 | orchestrator | 2025-05-28 16:57:55.242430 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-05-28 16:57:55.242892 | orchestrator | Wednesday 28 May 2025 16:57:55 +0000 (0:00:00.721) 0:03:38.199 ********* 2025-05-28 16:57:55.846108 | orchestrator | ok: [testbed-manager] 2025-05-28 16:57:55.846772 | orchestrator | ok: [testbed-node-0] 2025-05-28 16:57:55.847463 | orchestrator | ok: [testbed-node-5] 2025-05-28 16:57:55.850673 | orchestrator | ok: [testbed-node-4] 2025-05-28 16:57:55.850703 | orchestrator | ok: [testbed-node-1] 2025-05-28 16:57:55.850716 | orchestrator | ok: [testbed-node-2] 2025-05-28 16:57:55.851360 | orchestrator | ok: [testbed-node-3] 2025-05-28 16:57:55.851588 | orchestrator | 2025-05-28 16:57:55.851954 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-05-28 16:57:55.852305 | orchestrator | Wednesday 28 May 2025 16:57:55 +0000 (0:00:00.613) 0:03:38.812 ********* 2025-05-28 16:57:56.885093 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748449757.0836594, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 16:57:56.885897 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748449796.446703, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 16:57:56.886990 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748449786.728108, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 16:57:56.887982 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748449782.2316778, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 16:57:56.888988 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748449790.5452695, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 16:57:56.889708 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748449785.6163902, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 16:57:56.890546 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748449790.8057547, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 16:57:56.891034 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748449779.6116595, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 16:57:56.891992 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748449712.88867, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 16:57:56.892693 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748449705.556914, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 16:57:56.893157 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748449704.764041, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 16:57:56.893967 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748449702.2660635, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 16:57:56.894649 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748449709.7886117, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 16:57:56.895083 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748449710.0702906, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 16:57:56.895493 | orchestrator | 2025-05-28 16:57:56.896332 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-05-28 16:57:56.896676 | orchestrator | Wednesday 28 May 2025 16:57:56 +0000 (0:00:01.037) 0:03:39.849 ********* 2025-05-28 16:57:58.091468 | orchestrator | changed: [testbed-manager] 2025-05-28 16:57:58.095344 | orchestrator | changed: [testbed-node-4] 2025-05-28 16:57:58.095712 | orchestrator | changed: [testbed-node-3] 2025-05-28 16:57:58.096954 | orchestrator | changed: [testbed-node-0] 2025-05-28 16:57:58.097973 | orchestrator | changed: [testbed-node-5] 2025-05-28 16:57:58.099422 | orchestrator | changed: [testbed-node-1] 2025-05-28 16:57:58.099467 | orchestrator | changed: [testbed-node-2] 2025-05-28 16:57:58.099801 | orchestrator | 2025-05-28 16:57:58.100500 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-05-28 16:57:58.101700 | orchestrator | Wednesday 28 May 2025 16:57:58 +0000 (0:00:01.203) 0:03:41.053 ********* 2025-05-28 16:57:59.256645 | orchestrator | changed: 
[testbed-manager] 2025-05-28 16:57:59.258885 | orchestrator | changed: [testbed-node-3] 2025-05-28 16:57:59.258940 | orchestrator | changed: [testbed-node-0] 2025-05-28 16:57:59.261358 | orchestrator | changed: [testbed-node-5] 2025-05-28 16:57:59.262883 | orchestrator | changed: [testbed-node-4] 2025-05-28 16:57:59.263792 | orchestrator | changed: [testbed-node-1] 2025-05-28 16:57:59.264755 | orchestrator | changed: [testbed-node-2] 2025-05-28 16:57:59.265661 | orchestrator | 2025-05-28 16:57:59.266810 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-05-28 16:57:59.266965 | orchestrator | Wednesday 28 May 2025 16:57:59 +0000 (0:00:01.166) 0:03:42.220 ********* 2025-05-28 16:58:00.412394 | orchestrator | changed: [testbed-manager] 2025-05-28 16:58:00.413127 | orchestrator | changed: [testbed-node-3] 2025-05-28 16:58:00.413587 | orchestrator | changed: [testbed-node-4] 2025-05-28 16:58:00.414356 | orchestrator | changed: [testbed-node-5] 2025-05-28 16:58:00.415101 | orchestrator | changed: [testbed-node-0] 2025-05-28 16:58:00.415135 | orchestrator | changed: [testbed-node-1] 2025-05-28 16:58:00.415451 | orchestrator | changed: [testbed-node-2] 2025-05-28 16:58:00.417979 | orchestrator | 2025-05-28 16:58:00.418676 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-05-28 16:58:00.419469 | orchestrator | Wednesday 28 May 2025 16:58:00 +0000 (0:00:01.156) 0:03:43.377 ********* 2025-05-28 16:58:00.484401 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:58:00.523359 | orchestrator | skipping: [testbed-node-3] 2025-05-28 16:58:00.575066 | orchestrator | skipping: [testbed-node-4] 2025-05-28 16:58:00.603750 | orchestrator | skipping: [testbed-node-5] 2025-05-28 16:58:00.634557 | orchestrator | skipping: [testbed-node-0] 2025-05-28 16:58:00.689476 | orchestrator | skipping: [testbed-node-1] 2025-05-28 16:58:00.692670 | orchestrator | skipping: [testbed-node-2] 2025-05-28 16:58:00.692705 | orchestrator | 2025-05-28 16:58:00.693156 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-05-28 16:58:00.694192 | orchestrator | Wednesday 28 May 2025 16:58:00 +0000 (0:00:00.277) 0:03:43.654 ********* 2025-05-28 16:58:01.456868 | orchestrator | ok: [testbed-manager] 2025-05-28 16:58:01.459998 | orchestrator | ok: [testbed-node-3] 2025-05-28 16:58:01.460035 | orchestrator | ok: [testbed-node-4] 2025-05-28 16:58:01.460729 | orchestrator | ok: [testbed-node-5] 2025-05-28 16:58:01.462384 | orchestrator | ok: [testbed-node-0] 2025-05-28 16:58:01.463934 | orchestrator | ok: [testbed-node-1] 2025-05-28 16:58:01.463969 | orchestrator | ok: [testbed-node-2] 2025-05-28 16:58:01.465476 | orchestrator | 2025-05-28 16:58:01.466497 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-05-28 16:58:01.467620 | orchestrator | Wednesday 28 May 2025 16:58:01 +0000 (0:00:00.766) 0:03:44.420 ********* 2025-05-28 16:58:01.882555 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 16:58:01.882788 | orchestrator | 2025-05-28 16:58:01.883856 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-05-28 16:58:01.884692 | orchestrator | Wednesday 28 May 2025 16:58:01 +0000 
(0:00:00.426) 0:03:44.847 ********* 2025-05-28 16:58:09.949455 | orchestrator | ok: [testbed-manager] 2025-05-28 16:58:09.949814 | orchestrator | changed: [testbed-node-3] 2025-05-28 16:58:09.950800 | orchestrator | changed: [testbed-node-0] 2025-05-28 16:58:09.952472 | orchestrator | changed: [testbed-node-1] 2025-05-28 16:58:09.953022 | orchestrator | changed: [testbed-node-2] 2025-05-28 16:58:09.953665 | orchestrator | changed: [testbed-node-5] 2025-05-28 16:58:09.954566 | orchestrator | changed: [testbed-node-4] 2025-05-28 16:58:09.955092 | orchestrator | 2025-05-28 16:58:09.955860 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-05-28 16:58:09.956320 | orchestrator | Wednesday 28 May 2025 16:58:09 +0000 (0:00:08.064) 0:03:52.912 ********* 2025-05-28 16:58:11.172393 | orchestrator | ok: [testbed-manager] 2025-05-28 16:58:11.172577 | orchestrator | ok: [testbed-node-3] 2025-05-28 16:58:11.174252 | orchestrator | ok: [testbed-node-0] 2025-05-28 16:58:11.175710 | orchestrator | ok: [testbed-node-4] 2025-05-28 16:58:11.176640 | orchestrator | ok: [testbed-node-5] 2025-05-28 16:58:11.176719 | orchestrator | ok: [testbed-node-1] 2025-05-28 16:58:11.177509 | orchestrator | ok: [testbed-node-2] 2025-05-28 16:58:11.178802 | orchestrator | 2025-05-28 16:58:11.178841 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-05-28 16:58:11.179125 | orchestrator | Wednesday 28 May 2025 16:58:11 +0000 (0:00:01.223) 0:03:54.135 ********* 2025-05-28 16:58:12.162808 | orchestrator | ok: [testbed-manager] 2025-05-28 16:58:12.166488 | orchestrator | ok: [testbed-node-3] 2025-05-28 16:58:12.166590 | orchestrator | ok: [testbed-node-5] 2025-05-28 16:58:12.166604 | orchestrator | ok: [testbed-node-4] 2025-05-28 16:58:12.167067 | orchestrator | ok: [testbed-node-0] 2025-05-28 16:58:12.167190 | orchestrator | ok: [testbed-node-1] 2025-05-28 16:58:12.168101 | orchestrator | ok: [testbed-node-2] 2025-05-28 16:58:12.168716 | orchestrator | 2025-05-28 16:58:12.169467 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-05-28 16:58:12.170184 | orchestrator | Wednesday 28 May 2025 16:58:12 +0000 (0:00:00.990) 0:03:55.126 ********* 2025-05-28 16:58:12.679914 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 16:58:12.680174 | orchestrator | 2025-05-28 16:58:12.681428 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-05-28 16:58:12.682182 | orchestrator | Wednesday 28 May 2025 16:58:12 +0000 (0:00:00.519) 0:03:55.645 ********* 2025-05-28 16:58:21.866665 | orchestrator | changed: [testbed-manager] 2025-05-28 16:58:21.868708 | orchestrator | changed: [testbed-node-1] 2025-05-28 16:58:21.868756 | orchestrator | changed: [testbed-node-2] 2025-05-28 16:58:21.869720 | orchestrator | changed: [testbed-node-0] 2025-05-28 16:58:21.870903 | orchestrator | changed: [testbed-node-3] 2025-05-28 16:58:21.873107 | orchestrator | changed: [testbed-node-5] 2025-05-28 16:58:21.874700 | orchestrator | changed: [testbed-node-4] 2025-05-28 16:58:21.875195 | orchestrator | 2025-05-28 16:58:21.875443 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-05-28 16:58:21.875836 | 
orchestrator | Wednesday 28 May 2025 16:58:21 +0000 (0:00:09.179) 0:04:04.825 ********* 2025-05-28 16:58:22.485950 | orchestrator | changed: [testbed-manager] 2025-05-28 16:58:22.487130 | orchestrator | changed: [testbed-node-3] 2025-05-28 16:58:22.487802 | orchestrator | changed: [testbed-node-0] 2025-05-28 16:58:22.488941 | orchestrator | changed: [testbed-node-4] 2025-05-28 16:58:22.489455 | orchestrator | changed: [testbed-node-5] 2025-05-28 16:58:22.490446 | orchestrator | changed: [testbed-node-1] 2025-05-28 16:58:22.491277 | orchestrator | changed: [testbed-node-2] 2025-05-28 16:58:22.493294 | orchestrator | 2025-05-28 16:58:22.494211 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-05-28 16:58:22.495101 | orchestrator | Wednesday 28 May 2025 16:58:22 +0000 (0:00:00.626) 0:04:05.451 ********* 2025-05-28 16:58:23.646375 | orchestrator | changed: [testbed-manager] 2025-05-28 16:58:23.646691 | orchestrator | changed: [testbed-node-3] 2025-05-28 16:58:23.646828 | orchestrator | changed: [testbed-node-4] 2025-05-28 16:58:23.648282 | orchestrator | changed: [testbed-node-5] 2025-05-28 16:58:23.648797 | orchestrator | changed: [testbed-node-0] 2025-05-28 16:58:23.649594 | orchestrator | changed: [testbed-node-1] 2025-05-28 16:58:23.650521 | orchestrator | changed: [testbed-node-2] 2025-05-28 16:58:23.651247 | orchestrator | 2025-05-28 16:58:23.652086 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-05-28 16:58:23.652369 | orchestrator | Wednesday 28 May 2025 16:58:23 +0000 (0:00:01.159) 0:04:06.611 ********* 2025-05-28 16:58:24.682372 | orchestrator | changed: [testbed-manager] 2025-05-28 16:58:24.682500 | orchestrator | changed: [testbed-node-3] 2025-05-28 16:58:24.683099 | orchestrator | changed: [testbed-node-5] 2025-05-28 16:58:24.683507 | orchestrator | changed: [testbed-node-4] 2025-05-28 16:58:24.684543 | orchestrator | changed: [testbed-node-0] 2025-05-28 16:58:24.684952 | orchestrator | changed: [testbed-node-2] 2025-05-28 16:58:24.685646 | orchestrator | changed: [testbed-node-1] 2025-05-28 16:58:24.686433 | orchestrator | 2025-05-28 16:58:24.686838 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-05-28 16:58:24.687687 | orchestrator | Wednesday 28 May 2025 16:58:24 +0000 (0:00:01.031) 0:04:07.643 ********* 2025-05-28 16:58:24.829011 | orchestrator | ok: [testbed-manager] 2025-05-28 16:58:24.864615 | orchestrator | ok: [testbed-node-3] 2025-05-28 16:58:24.894603 | orchestrator | ok: [testbed-node-4] 2025-05-28 16:58:24.932118 | orchestrator | ok: [testbed-node-5] 2025-05-28 16:58:24.998333 | orchestrator | ok: [testbed-node-0] 2025-05-28 16:58:24.999957 | orchestrator | ok: [testbed-node-1] 2025-05-28 16:58:25.002129 | orchestrator | ok: [testbed-node-2] 2025-05-28 16:58:25.003676 | orchestrator | 2025-05-28 16:58:25.004810 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-05-28 16:58:25.005801 | orchestrator | Wednesday 28 May 2025 16:58:24 +0000 (0:00:00.318) 0:04:07.962 ********* 2025-05-28 16:58:25.111328 | orchestrator | ok: [testbed-manager] 2025-05-28 16:58:25.149470 | orchestrator | ok: [testbed-node-3] 2025-05-28 16:58:25.180547 | orchestrator | ok: [testbed-node-4] 2025-05-28 16:58:25.210764 | orchestrator | ok: [testbed-node-5] 2025-05-28 16:58:25.313131 | orchestrator | ok: [testbed-node-0] 2025-05-28 16:58:25.314207 | orchestrator | ok: 
[testbed-node-1] 2025-05-28 16:58:25.316063 | orchestrator | ok: [testbed-node-2] 2025-05-28 16:58:25.317700 | orchestrator | 2025-05-28 16:58:25.318605 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-05-28 16:58:25.320013 | orchestrator | Wednesday 28 May 2025 16:58:25 +0000 (0:00:00.314) 0:04:08.276 ********* 2025-05-28 16:58:25.417629 | orchestrator | ok: [testbed-manager] 2025-05-28 16:58:25.449592 | orchestrator | ok: [testbed-node-3] 2025-05-28 16:58:25.489766 | orchestrator | ok: [testbed-node-4] 2025-05-28 16:58:25.526925 | orchestrator | ok: [testbed-node-5] 2025-05-28 16:58:25.602378 | orchestrator | ok: [testbed-node-0] 2025-05-28 16:58:25.607665 | orchestrator | ok: [testbed-node-1] 2025-05-28 16:58:25.608983 | orchestrator | ok: [testbed-node-2] 2025-05-28 16:58:25.610086 | orchestrator | 2025-05-28 16:58:25.611201 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-05-28 16:58:25.611993 | orchestrator | Wednesday 28 May 2025 16:58:25 +0000 (0:00:00.288) 0:04:08.565 ********* 2025-05-28 16:58:31.324144 | orchestrator | ok: [testbed-manager] 2025-05-28 16:58:31.324361 | orchestrator | ok: [testbed-node-0] 2025-05-28 16:58:31.324380 | orchestrator | ok: [testbed-node-5] 2025-05-28 16:58:31.325296 | orchestrator | ok: [testbed-node-1] 2025-05-28 16:58:31.325791 | orchestrator | ok: [testbed-node-3] 2025-05-28 16:58:31.326433 | orchestrator | ok: [testbed-node-2] 2025-05-28 16:58:31.327071 | orchestrator | ok: [testbed-node-4] 2025-05-28 16:58:31.327853 | orchestrator | 2025-05-28 16:58:31.328248 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-05-28 16:58:31.328833 | orchestrator | Wednesday 28 May 2025 16:58:31 +0000 (0:00:05.723) 0:04:14.288 ********* 2025-05-28 16:58:31.708285 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 16:58:31.709262 | orchestrator | 2025-05-28 16:58:31.711457 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-05-28 16:58:31.712084 | orchestrator | Wednesday 28 May 2025 16:58:31 +0000 (0:00:00.382) 0:04:14.671 ********* 2025-05-28 16:58:31.796151 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-05-28 16:58:31.796358 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-05-28 16:58:31.797852 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-05-28 16:58:31.801732 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-05-28 16:58:31.830429 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:58:31.874585 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-05-28 16:58:31.874671 | orchestrator | skipping: [testbed-node-3] 2025-05-28 16:58:31.874730 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-05-28 16:58:31.875074 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-05-28 16:58:31.923418 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-05-28 16:58:31.923617 | orchestrator | skipping: [testbed-node-4] 2025-05-28 16:58:31.924043 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-05-28 16:58:31.924349 | 
orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-05-28 16:58:31.956941 | orchestrator | skipping: [testbed-node-5] 2025-05-28 16:58:31.957634 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-05-28 16:58:32.041057 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-05-28 16:58:32.041257 | orchestrator | skipping: [testbed-node-0] 2025-05-28 16:58:32.041876 | orchestrator | skipping: [testbed-node-1] 2025-05-28 16:58:32.042285 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-05-28 16:58:32.042778 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-05-28 16:58:32.043973 | orchestrator | skipping: [testbed-node-2] 2025-05-28 16:58:32.044266 | orchestrator | 2025-05-28 16:58:32.044830 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-05-28 16:58:32.045303 | orchestrator | Wednesday 28 May 2025 16:58:32 +0000 (0:00:00.336) 0:04:15.007 ********* 2025-05-28 16:58:32.511818 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 16:58:32.512065 | orchestrator | 2025-05-28 16:58:32.512830 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-05-28 16:58:32.513199 | orchestrator | Wednesday 28 May 2025 16:58:32 +0000 (0:00:00.469) 0:04:15.477 ********* 2025-05-28 16:58:32.607088 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-05-28 16:58:32.608035 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-05-28 16:58:32.648665 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:58:32.648896 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-05-28 16:58:32.686382 | orchestrator | skipping: [testbed-node-3] 2025-05-28 16:58:32.687699 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-05-28 16:58:32.725156 | orchestrator | skipping: [testbed-node-4] 2025-05-28 16:58:32.726156 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-05-28 16:58:32.758739 | orchestrator | skipping: [testbed-node-5] 2025-05-28 16:58:32.832191 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-05-28 16:58:32.832398 | orchestrator | skipping: [testbed-node-0] 2025-05-28 16:58:32.832415 | orchestrator | skipping: [testbed-node-1] 2025-05-28 16:58:32.832504 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-05-28 16:58:32.832995 | orchestrator | skipping: [testbed-node-2] 2025-05-28 16:58:32.833621 | orchestrator | 2025-05-28 16:58:32.834088 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-05-28 16:58:32.835028 | orchestrator | Wednesday 28 May 2025 16:58:32 +0000 (0:00:00.312) 0:04:15.789 ********* 2025-05-28 16:58:33.368828 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 16:58:33.370858 | orchestrator | 2025-05-28 16:58:33.371465 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-05-28 
16:58:33.372381 | orchestrator | Wednesday 28 May 2025 16:58:33 +0000 (0:00:00.543) 0:04:16.333 ********* 2025-05-28 16:59:08.851289 | orchestrator | changed: [testbed-manager] 2025-05-28 16:59:08.851416 | orchestrator | changed: [testbed-node-3] 2025-05-28 16:59:08.852752 | orchestrator | changed: [testbed-node-2] 2025-05-28 16:59:08.854622 | orchestrator | changed: [testbed-node-1] 2025-05-28 16:59:08.855326 | orchestrator | changed: [testbed-node-0] 2025-05-28 16:59:08.855949 | orchestrator | changed: [testbed-node-5] 2025-05-28 16:59:08.856435 | orchestrator | changed: [testbed-node-4] 2025-05-28 16:59:08.857727 | orchestrator | 2025-05-28 16:59:08.857749 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-05-28 16:59:08.858279 | orchestrator | Wednesday 28 May 2025 16:59:08 +0000 (0:00:35.477) 0:04:51.810 ********* 2025-05-28 16:59:16.636426 | orchestrator | changed: [testbed-manager] 2025-05-28 16:59:16.636657 | orchestrator | changed: [testbed-node-3] 2025-05-28 16:59:16.636674 | orchestrator | changed: [testbed-node-0] 2025-05-28 16:59:16.636685 | orchestrator | changed: [testbed-node-1] 2025-05-28 16:59:16.636696 | orchestrator | changed: [testbed-node-5] 2025-05-28 16:59:16.636781 | orchestrator | changed: [testbed-node-2] 2025-05-28 16:59:16.636797 | orchestrator | changed: [testbed-node-4] 2025-05-28 16:59:16.636884 | orchestrator | 2025-05-28 16:59:16.640561 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-05-28 16:59:16.640822 | orchestrator | Wednesday 28 May 2025 16:59:16 +0000 (0:00:07.785) 0:04:59.596 ********* 2025-05-28 16:59:24.590120 | orchestrator | changed: [testbed-manager] 2025-05-28 16:59:24.591327 | orchestrator | changed: [testbed-node-2] 2025-05-28 16:59:24.592794 | orchestrator | changed: [testbed-node-3] 2025-05-28 16:59:24.594181 | orchestrator | changed: [testbed-node-0] 2025-05-28 16:59:24.594635 | orchestrator | changed: [testbed-node-1] 2025-05-28 16:59:24.595098 | orchestrator | changed: [testbed-node-5] 2025-05-28 16:59:24.595764 | orchestrator | changed: [testbed-node-4] 2025-05-28 16:59:24.596621 | orchestrator | 2025-05-28 16:59:24.596867 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-05-28 16:59:24.597471 | orchestrator | Wednesday 28 May 2025 16:59:24 +0000 (0:00:07.958) 0:05:07.554 ********* 2025-05-28 16:59:26.260002 | orchestrator | ok: [testbed-manager] 2025-05-28 16:59:26.260190 | orchestrator | ok: [testbed-node-3] 2025-05-28 16:59:26.261036 | orchestrator | ok: [testbed-node-2] 2025-05-28 16:59:26.261897 | orchestrator | ok: [testbed-node-0] 2025-05-28 16:59:26.263442 | orchestrator | ok: [testbed-node-1] 2025-05-28 16:59:26.263925 | orchestrator | ok: [testbed-node-5] 2025-05-28 16:59:26.265076 | orchestrator | ok: [testbed-node-4] 2025-05-28 16:59:26.265439 | orchestrator | 2025-05-28 16:59:26.266165 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-05-28 16:59:26.266679 | orchestrator | Wednesday 28 May 2025 16:59:26 +0000 (0:00:01.667) 0:05:09.222 ********* 2025-05-28 16:59:31.933211 | orchestrator | changed: [testbed-manager] 2025-05-28 16:59:31.933350 | orchestrator | changed: [testbed-node-3] 2025-05-28 16:59:31.935217 | orchestrator | changed: [testbed-node-1] 2025-05-28 16:59:31.937140 | orchestrator | changed: [testbed-node-0] 2025-05-28 16:59:31.937768 | orchestrator | changed: [testbed-node-2] 2025-05-28 
16:59:31.939648 | orchestrator | changed: [testbed-node-5] 2025-05-28 16:59:31.941532 | orchestrator | changed: [testbed-node-4] 2025-05-28 16:59:31.941564 | orchestrator | 2025-05-28 16:59:31.941919 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-05-28 16:59:31.942887 | orchestrator | Wednesday 28 May 2025 16:59:31 +0000 (0:00:05.674) 0:05:14.897 ********* 2025-05-28 16:59:32.384360 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 16:59:32.385495 | orchestrator | 2025-05-28 16:59:32.386294 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-05-28 16:59:32.387258 | orchestrator | Wednesday 28 May 2025 16:59:32 +0000 (0:00:00.451) 0:05:15.348 ********* 2025-05-28 16:59:33.072407 | orchestrator | changed: [testbed-manager] 2025-05-28 16:59:33.072598 | orchestrator | changed: [testbed-node-3] 2025-05-28 16:59:33.076822 | orchestrator | changed: [testbed-node-4] 2025-05-28 16:59:33.077565 | orchestrator | changed: [testbed-node-0] 2025-05-28 16:59:33.078100 | orchestrator | changed: [testbed-node-5] 2025-05-28 16:59:33.079153 | orchestrator | changed: [testbed-node-1] 2025-05-28 16:59:33.079871 | orchestrator | changed: [testbed-node-2] 2025-05-28 16:59:33.080510 | orchestrator | 2025-05-28 16:59:33.081450 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-05-28 16:59:33.082728 | orchestrator | Wednesday 28 May 2025 16:59:33 +0000 (0:00:00.687) 0:05:16.036 ********* 2025-05-28 16:59:34.351199 | orchestrator | ok: [testbed-node-3] 2025-05-28 16:59:34.351823 | orchestrator | ok: [testbed-node-2] 2025-05-28 16:59:34.352740 | orchestrator | ok: [testbed-manager] 2025-05-28 16:59:34.353693 | orchestrator | ok: [testbed-node-5] 2025-05-28 16:59:34.354261 | orchestrator | ok: [testbed-node-0] 2025-05-28 16:59:34.354898 | orchestrator | ok: [testbed-node-4] 2025-05-28 16:59:34.355442 | orchestrator | ok: [testbed-node-1] 2025-05-28 16:59:34.355929 | orchestrator | 2025-05-28 16:59:34.356567 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-05-28 16:59:34.357050 | orchestrator | Wednesday 28 May 2025 16:59:34 +0000 (0:00:01.278) 0:05:17.314 ********* 2025-05-28 16:59:35.105760 | orchestrator | changed: [testbed-node-4] 2025-05-28 16:59:35.106491 | orchestrator | changed: [testbed-node-3] 2025-05-28 16:59:35.108077 | orchestrator | changed: [testbed-node-0] 2025-05-28 16:59:35.108592 | orchestrator | changed: [testbed-node-5] 2025-05-28 16:59:35.109630 | orchestrator | changed: [testbed-node-1] 2025-05-28 16:59:35.110337 | orchestrator | changed: [testbed-node-2] 2025-05-28 16:59:35.111127 | orchestrator | changed: [testbed-manager] 2025-05-28 16:59:35.111716 | orchestrator | 2025-05-28 16:59:35.112819 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-05-28 16:59:35.113743 | orchestrator | Wednesday 28 May 2025 16:59:35 +0000 (0:00:00.755) 0:05:18.069 ********* 2025-05-28 16:59:35.184179 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:59:35.218760 | orchestrator | skipping: [testbed-node-3] 2025-05-28 16:59:35.249833 | orchestrator | skipping: [testbed-node-4] 2025-05-28 16:59:35.334247 | orchestrator | skipping: [testbed-node-5] 
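
The timezone tasks above install tzdata and switch every host to UTC; the two /etc/adjtime tasks are guarded by a condition that does not apply in this run, hence the `skipping` results. A minimal sketch of the core steps, assuming community.general.timezone is an acceptable stand-in for whatever the osism.commons.timezone role does internally:

- name: Install tzdata package
  ansible.builtin.apt:
    name: tzdata
    state: present

- name: Set timezone to UTC
  community.general.timezone:
    name: UTC

On Debian-family systems this adjusts /etc/localtime (and /etc/timezone), which is why the task reports `changed` on first application and `ok` afterwards.
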
2025-05-28 16:59:35.388448 | orchestrator | skipping: [testbed-node-0] 2025-05-28 16:59:35.388648 | orchestrator | skipping: [testbed-node-1] 2025-05-28 16:59:35.389655 | orchestrator | skipping: [testbed-node-2] 2025-05-28 16:59:35.390305 | orchestrator | 2025-05-28 16:59:35.390964 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-05-28 16:59:35.391698 | orchestrator | Wednesday 28 May 2025 16:59:35 +0000 (0:00:00.283) 0:05:18.353 ********* 2025-05-28 16:59:35.468955 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:59:35.502693 | orchestrator | skipping: [testbed-node-3] 2025-05-28 16:59:35.535913 | orchestrator | skipping: [testbed-node-4] 2025-05-28 16:59:35.571935 | orchestrator | skipping: [testbed-node-5] 2025-05-28 16:59:35.605142 | orchestrator | skipping: [testbed-node-0] 2025-05-28 16:59:35.790261 | orchestrator | skipping: [testbed-node-1] 2025-05-28 16:59:35.791154 | orchestrator | skipping: [testbed-node-2] 2025-05-28 16:59:35.791883 | orchestrator | 2025-05-28 16:59:35.792863 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-05-28 16:59:35.793647 | orchestrator | Wednesday 28 May 2025 16:59:35 +0000 (0:00:00.401) 0:05:18.755 ********* 2025-05-28 16:59:35.905409 | orchestrator | ok: [testbed-manager] 2025-05-28 16:59:35.943305 | orchestrator | ok: [testbed-node-3] 2025-05-28 16:59:35.976609 | orchestrator | ok: [testbed-node-4] 2025-05-28 16:59:36.020549 | orchestrator | ok: [testbed-node-5] 2025-05-28 16:59:36.092916 | orchestrator | ok: [testbed-node-0] 2025-05-28 16:59:36.093321 | orchestrator | ok: [testbed-node-1] 2025-05-28 16:59:36.093895 | orchestrator | ok: [testbed-node-2] 2025-05-28 16:59:36.094681 | orchestrator | 2025-05-28 16:59:36.095491 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-05-28 16:59:36.096896 | orchestrator | Wednesday 28 May 2025 16:59:36 +0000 (0:00:00.304) 0:05:19.059 ********* 2025-05-28 16:59:36.172167 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:59:36.256598 | orchestrator | skipping: [testbed-node-3] 2025-05-28 16:59:36.290924 | orchestrator | skipping: [testbed-node-4] 2025-05-28 16:59:36.326131 | orchestrator | skipping: [testbed-node-5] 2025-05-28 16:59:36.382941 | orchestrator | skipping: [testbed-node-0] 2025-05-28 16:59:36.384443 | orchestrator | skipping: [testbed-node-1] 2025-05-28 16:59:36.385104 | orchestrator | skipping: [testbed-node-2] 2025-05-28 16:59:36.387682 | orchestrator | 2025-05-28 16:59:36.387724 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-05-28 16:59:36.387746 | orchestrator | Wednesday 28 May 2025 16:59:36 +0000 (0:00:00.289) 0:05:19.349 ********* 2025-05-28 16:59:36.491536 | orchestrator | ok: [testbed-manager] 2025-05-28 16:59:36.526186 | orchestrator | ok: [testbed-node-3] 2025-05-28 16:59:36.560344 | orchestrator | ok: [testbed-node-4] 2025-05-28 16:59:36.599970 | orchestrator | ok: [testbed-node-5] 2025-05-28 16:59:36.675482 | orchestrator | ok: [testbed-node-0] 2025-05-28 16:59:36.676478 | orchestrator | ok: [testbed-node-1] 2025-05-28 16:59:36.677422 | orchestrator | ok: [testbed-node-2] 2025-05-28 16:59:36.679047 | orchestrator | 2025-05-28 16:59:36.679937 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-05-28 16:59:36.681100 | orchestrator | Wednesday 28 May 2025 16:59:36 +0000 (0:00:00.292) 0:05:19.642 ********* 
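
The docker role only fills in version variables when nothing is configured, which is why the "Set docker_version variable to default value" task is skipped here, and it then echoes the effective values (printed just below). A minimal sketch of that defaulting-and-reporting pattern, with the default value taken from this run purely for illustration:

- name: Set docker_version variable to default value
  ansible.builtin.set_fact:
    docker_version: "5:27.5.1"
  when: docker_version is not defined

- name: Print used docker version
  ansible.builtin.debug:
    var: docker_version

The leading "5:" in 5:27.5.1 is the Debian package epoch, not part of the upstream Docker version (27.5.1).
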
2025-05-28 16:59:36.908946 | orchestrator | ok: [testbed-manager] =>  2025-05-28 16:59:36.909467 | orchestrator |  docker_version: 5:27.5.1 2025-05-28 16:59:36.944530 | orchestrator | ok: [testbed-node-3] =>  2025-05-28 16:59:36.945023 | orchestrator |  docker_version: 5:27.5.1 2025-05-28 16:59:36.981449 | orchestrator | ok: [testbed-node-4] =>  2025-05-28 16:59:36.981697 | orchestrator |  docker_version: 5:27.5.1 2025-05-28 16:59:37.017820 | orchestrator | ok: [testbed-node-5] =>  2025-05-28 16:59:37.018099 | orchestrator |  docker_version: 5:27.5.1 2025-05-28 16:59:37.076069 | orchestrator | ok: [testbed-node-0] =>  2025-05-28 16:59:37.077123 | orchestrator |  docker_version: 5:27.5.1 2025-05-28 16:59:37.078733 | orchestrator | ok: [testbed-node-1] =>  2025-05-28 16:59:37.079476 | orchestrator |  docker_version: 5:27.5.1 2025-05-28 16:59:37.080495 | orchestrator | ok: [testbed-node-2] =>  2025-05-28 16:59:37.081611 | orchestrator |  docker_version: 5:27.5.1 2025-05-28 16:59:37.082428 | orchestrator | 2025-05-28 16:59:37.083570 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-05-28 16:59:37.084397 | orchestrator | Wednesday 28 May 2025 16:59:37 +0000 (0:00:00.399) 0:05:20.041 ********* 2025-05-28 16:59:37.178302 | orchestrator | ok: [testbed-manager] =>  2025-05-28 16:59:37.178643 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-28 16:59:37.209154 | orchestrator | ok: [testbed-node-3] =>  2025-05-28 16:59:37.209986 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-28 16:59:37.242194 | orchestrator | ok: [testbed-node-4] =>  2025-05-28 16:59:37.243398 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-28 16:59:37.271547 | orchestrator | ok: [testbed-node-5] =>  2025-05-28 16:59:37.272271 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-28 16:59:37.342635 | orchestrator | ok: [testbed-node-0] =>  2025-05-28 16:59:37.342761 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-28 16:59:37.342865 | orchestrator | ok: [testbed-node-1] =>  2025-05-28 16:59:37.343580 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-28 16:59:37.343773 | orchestrator | ok: [testbed-node-2] =>  2025-05-28 16:59:37.343978 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-28 16:59:37.344311 | orchestrator | 2025-05-28 16:59:37.345518 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-05-28 16:59:37.345542 | orchestrator | Wednesday 28 May 2025 16:59:37 +0000 (0:00:00.266) 0:05:20.308 ********* 2025-05-28 16:59:37.425410 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:59:37.460908 | orchestrator | skipping: [testbed-node-3] 2025-05-28 16:59:37.494554 | orchestrator | skipping: [testbed-node-4] 2025-05-28 16:59:37.525895 | orchestrator | skipping: [testbed-node-5] 2025-05-28 16:59:37.560567 | orchestrator | skipping: [testbed-node-0] 2025-05-28 16:59:37.619093 | orchestrator | skipping: [testbed-node-1] 2025-05-28 16:59:37.620190 | orchestrator | skipping: [testbed-node-2] 2025-05-28 16:59:37.621383 | orchestrator | 2025-05-28 16:59:37.622368 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-05-28 16:59:37.623859 | orchestrator | Wednesday 28 May 2025 16:59:37 +0000 (0:00:00.276) 0:05:20.585 ********* 2025-05-28 16:59:37.715730 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:59:37.750381 | orchestrator | skipping: [testbed-node-3] 2025-05-28 16:59:37.778877 | orchestrator | skipping: [testbed-node-4] 
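
The install flow that follows (repository key, repository, cache update, version pinning, install, lock) keeps containerd and docker at the exact configured version. The Unlock/Install/Lock sequence seen further down maps naturally onto apt holds; a minimal sketch, where the package name containerd.io and the containerd_version variable are assumptions for illustration:

- name: Unlock containerd package
  ansible.builtin.dpkg_selections:
    name: containerd.io
    selection: install   # release an existing hold so apt may change the package

- name: Install containerd package
  ansible.builtin.apt:
    name: "containerd.io={{ containerd_version }}"
    state: present

- name: Lock containerd package
  ansible.builtin.dpkg_selections:
    name: containerd.io
    selection: hold      # prevent unattended upgrades from drifting off the pin

Releasing the hold before installing and re-applying it afterwards is what makes the pin stick; it also explains why "Unlock containerd package" reports `changed` only on testbed-manager, the one host that already had a hold in place from its earlier provisioning.
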
2025-05-28 16:59:37.826381 | orchestrator | skipping: [testbed-node-5] 2025-05-28 16:59:37.885009 | orchestrator | skipping: [testbed-node-0] 2025-05-28 16:59:37.888402 | orchestrator | skipping: [testbed-node-1] 2025-05-28 16:59:37.889645 | orchestrator | skipping: [testbed-node-2] 2025-05-28 16:59:37.890551 | orchestrator | 2025-05-28 16:59:37.891845 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-05-28 16:59:37.893091 | orchestrator | Wednesday 28 May 2025 16:59:37 +0000 (0:00:00.265) 0:05:20.850 ********* 2025-05-28 16:59:38.299329 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 16:59:38.299429 | orchestrator | 2025-05-28 16:59:38.303081 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-05-28 16:59:38.303127 | orchestrator | Wednesday 28 May 2025 16:59:38 +0000 (0:00:00.412) 0:05:21.263 ********* 2025-05-28 16:59:39.090984 | orchestrator | ok: [testbed-node-3] 2025-05-28 16:59:39.091169 | orchestrator | ok: [testbed-node-5] 2025-05-28 16:59:39.092931 | orchestrator | ok: [testbed-node-2] 2025-05-28 16:59:39.092958 | orchestrator | ok: [testbed-manager] 2025-05-28 16:59:39.093778 | orchestrator | ok: [testbed-node-1] 2025-05-28 16:59:39.093945 | orchestrator | ok: [testbed-node-4] 2025-05-28 16:59:39.094671 | orchestrator | ok: [testbed-node-0] 2025-05-28 16:59:39.095149 | orchestrator | 2025-05-28 16:59:39.095608 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-05-28 16:59:39.096343 | orchestrator | Wednesday 28 May 2025 16:59:39 +0000 (0:00:00.791) 0:05:22.054 ********* 2025-05-28 16:59:41.700559 | orchestrator | ok: [testbed-node-1] 2025-05-28 16:59:41.701124 | orchestrator | ok: [testbed-node-3] 2025-05-28 16:59:41.704256 | orchestrator | ok: [testbed-node-0] 2025-05-28 16:59:41.706967 | orchestrator | ok: [testbed-node-2] 2025-05-28 16:59:41.707798 | orchestrator | ok: [testbed-node-5] 2025-05-28 16:59:41.708974 | orchestrator | ok: [testbed-node-4] 2025-05-28 16:59:41.709876 | orchestrator | ok: [testbed-manager] 2025-05-28 16:59:41.711157 | orchestrator | 2025-05-28 16:59:41.711665 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-05-28 16:59:41.712358 | orchestrator | Wednesday 28 May 2025 16:59:41 +0000 (0:00:02.610) 0:05:24.665 ********* 2025-05-28 16:59:41.771372 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-05-28 16:59:41.772206 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-05-28 16:59:41.851608 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-05-28 16:59:41.851827 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-05-28 16:59:41.852691 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-05-28 16:59:41.856796 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-05-28 16:59:42.080183 | orchestrator | skipping: [testbed-manager] 2025-05-28 16:59:42.080444 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-05-28 16:59:42.081003 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-05-28 16:59:42.081495 | orchestrator | skipping: [testbed-node-4] => 
(item=docker-engine)  2025-05-28 16:59:42.148740 | orchestrator | skipping: [testbed-node-3] 2025-05-28 16:59:42.149380 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-05-28 16:59:42.150155 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-05-28 16:59:42.222934 | orchestrator | skipping: [testbed-node-4] 2025-05-28 16:59:42.223563 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-05-28 16:59:42.224989 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-05-28 16:59:42.225741 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-05-28 16:59:42.305746 | orchestrator | skipping: [testbed-node-5] 2025-05-28 16:59:42.308718 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-05-28 16:59:42.308748 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-05-28 16:59:42.308760 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-05-28 16:59:42.446952 | orchestrator | skipping: [testbed-node-0] 2025-05-28 16:59:42.448275 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-05-28 16:59:42.449829 | orchestrator | skipping: [testbed-node-1] 2025-05-28 16:59:42.452460 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-05-28 16:59:42.452487 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-05-28 16:59:42.453522 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-05-28 16:59:42.455118 | orchestrator | skipping: [testbed-node-2] 2025-05-28 16:59:42.456580 | orchestrator | 2025-05-28 16:59:42.457347 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-05-28 16:59:42.457880 | orchestrator | Wednesday 28 May 2025 16:59:42 +0000 (0:00:00.744) 0:05:25.409 ********* 2025-05-28 16:59:49.068093 | orchestrator | ok: [testbed-manager] 2025-05-28 16:59:49.068294 | orchestrator | changed: [testbed-node-3] 2025-05-28 16:59:49.068315 | orchestrator | changed: [testbed-node-2] 2025-05-28 16:59:49.069189 | orchestrator | changed: [testbed-node-1] 2025-05-28 16:59:49.070358 | orchestrator | changed: [testbed-node-0] 2025-05-28 16:59:49.073022 | orchestrator | changed: [testbed-node-5] 2025-05-28 16:59:49.077194 | orchestrator | changed: [testbed-node-4] 2025-05-28 16:59:49.079005 | orchestrator | 2025-05-28 16:59:49.079910 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-05-28 16:59:49.080904 | orchestrator | Wednesday 28 May 2025 16:59:49 +0000 (0:00:06.619) 0:05:32.029 ********* 2025-05-28 16:59:50.123733 | orchestrator | ok: [testbed-manager] 2025-05-28 16:59:50.123976 | orchestrator | changed: [testbed-node-4] 2025-05-28 16:59:50.124879 | orchestrator | changed: [testbed-node-5] 2025-05-28 16:59:50.126098 | orchestrator | changed: [testbed-node-0] 2025-05-28 16:59:50.129184 | orchestrator | changed: [testbed-node-3] 2025-05-28 16:59:50.129655 | orchestrator | changed: [testbed-node-1] 2025-05-28 16:59:50.130524 | orchestrator | changed: [testbed-node-2] 2025-05-28 16:59:50.130884 | orchestrator | 2025-05-28 16:59:50.131482 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-05-28 16:59:50.132477 | orchestrator | Wednesday 28 May 2025 16:59:50 +0000 (0:00:01.060) 0:05:33.089 ********* 2025-05-28 16:59:58.315730 | orchestrator | ok: [testbed-manager] 2025-05-28 16:59:58.316327 | orchestrator | changed: [testbed-node-3] 2025-05-28 
2025-05-28 16:59:58.325180 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-05-28 16:59:58.325936 | orchestrator | Wednesday 28 May 2025 16:59:58 +0000 (0:00:08.186) 0:05:41.276 *********
2025-05-28 17:00:01.456085 | orchestrator | changed: [testbed-manager]
2025-05-28 17:00:01.456217 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:00:01.456554 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:00:01.457795 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:00:01.459463 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:00:01.459752 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:00:01.460602 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:00:01.461132 | orchestrator |
2025-05-28 17:00:01.463564 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-05-28 17:00:01.463590 | orchestrator | Wednesday 28 May 2025 17:00:01 +0000 (0:00:03.143) 0:05:44.419 *********
2025-05-28 17:00:03.074207 | orchestrator | ok: [testbed-manager]
2025-05-28 17:00:03.074800 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:00:03.076478 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:00:03.077710 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:00:03.078489 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:00:03.079590 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:00:03.080371 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:00:03.081087 | orchestrator |
2025-05-28 17:00:03.081938 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-05-28 17:00:03.083633 | orchestrator | Wednesday 28 May 2025 17:00:03 +0000 (0:00:01.618) 0:05:46.038 *********
2025-05-28 17:00:04.418421 | orchestrator | ok: [testbed-manager]
2025-05-28 17:00:04.418578 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:00:04.418982 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:00:04.419669 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:00:04.422902 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:00:04.422996 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:00:04.423431 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:00:04.424900 | orchestrator |
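Package pinning on Debian-family hosts is usually an apt preferences entry per package; a sketch with an illustrative pin (the real version pattern comes from role defaults, not from this log):

- name: Pin docker package version
  ansible.builtin.copy:
    dest: /etc/apt/preferences.d/docker-ce
    content: |
      Package: docker-ce
      Pin: version 5:27.*
      Pin-Priority: 1001
    mode: "0644"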
2025-05-28 17:00:04.425241 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-05-28 17:00:04.426343 | orchestrator | Wednesday 28 May 2025 17:00:04 +0000 (0:00:01.343) 0:05:47.381 *********
2025-05-28 17:00:04.626773 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:00:04.694653 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:00:04.773287 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:00:04.841059 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:00:05.027725 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:00:05.027819 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:00:05.028944 | orchestrator | changed: [testbed-manager]
2025-05-28 17:00:05.029992 | orchestrator |
2025-05-28 17:00:05.031081 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-05-28 17:00:05.031696 | orchestrator | Wednesday 28 May 2025 17:00:05 +0000 (0:00:00.610) 0:05:47.992 *********
2025-05-28 17:00:15.630675 | orchestrator | ok: [testbed-manager]
2025-05-28 17:00:15.630930 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:00:15.633743 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:00:15.634668 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:00:15.637953 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:00:15.638598 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:00:15.639767 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:00:15.640345 | orchestrator |
2025-05-28 17:00:15.640926 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-05-28 17:00:15.641890 | orchestrator | Wednesday 28 May 2025 17:00:15 +0000 (0:00:10.597) 0:05:58.589 *********
2025-05-28 17:00:16.825742 | orchestrator | changed: [testbed-manager]
2025-05-28 17:00:16.828533 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:00:16.828567 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:00:16.829748 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:00:16.830359 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:00:16.831130 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:00:16.831570 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:00:16.832462 | orchestrator |
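The unlock/install/lock sequence is the usual dpkg hold dance, so a later blanket apt upgrade cannot move containerd; a sketch (the containerd.io package name is an assumption):

- name: Unlock containerd package
  ansible.builtin.dpkg_selections:
    name: containerd.io
    selection: install

- name: Install containerd package
  ansible.builtin.apt:
    name: containerd.io
    state: present

- name: Lock containerd package
  ansible.builtin.dpkg_selections:
    name: containerd.io
    selection: hold

Unlock is skipped on the six nodes because nothing was held yet; only the already-bootstrapped manager had a hold to release.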
2025-05-28 17:00:16.834350 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-05-28 17:00:16.835512 | orchestrator | Wednesday 28 May 2025 17:00:16 +0000 (0:00:01.198) 0:05:59.788 *********
2025-05-28 17:00:26.156726 | orchestrator | ok: [testbed-manager]
2025-05-28 17:00:26.156867 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:00:26.157635 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:00:26.159540 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:00:26.159949 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:00:26.161346 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:00:26.161826 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:00:26.162981 | orchestrator |
2025-05-28 17:00:26.163556 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-05-28 17:00:26.164883 | orchestrator | Wednesday 28 May 2025 17:00:26 +0000 (0:00:09.328) 0:06:09.117 *********
2025-05-28 17:00:37.294078 | orchestrator | ok: [testbed-manager]
2025-05-28 17:00:37.294188 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:00:37.295203 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:00:37.296738 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:00:37.298707 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:00:37.299294 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:00:37.299916 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:00:37.301299 | orchestrator |
2025-05-28 17:00:37.301322 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-05-28 17:00:37.301649 | orchestrator | Wednesday 28 May 2025 17:00:37 +0000 (0:00:11.138) 0:06:20.256 *********
2025-05-28 17:00:37.738992 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-05-28 17:00:37.741533 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-05-28 17:00:38.594449 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-05-28 17:00:38.595194 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-05-28 17:00:38.596565 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-05-28 17:00:38.597170 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-05-28 17:00:38.598163 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-05-28 17:00:38.598888 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-05-28 17:00:38.599854 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-05-28 17:00:38.600315 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-05-28 17:00:38.600952 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-05-28 17:00:38.601773 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-05-28 17:00:38.602465 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-05-28 17:00:38.602876 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-05-28 17:00:38.603620 | orchestrator |
2025-05-28 17:00:38.604167 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-05-28 17:00:38.605577 | orchestrator | Wednesday 28 May 2025 17:00:38 +0000 (0:00:01.301) 0:06:21.557 *********
2025-05-28 17:00:38.743587 | orchestrator | skipping: [testbed-manager]
2025-05-28 17:00:38.804137 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:00:38.868137 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:00:38.939338 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:00:39.004350 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:00:39.135665 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:00:39.136210 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:00:39.137725 | orchestrator |
2025-05-28 17:00:39.138496 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-05-28 17:00:39.139777 | orchestrator | Wednesday 28 May 2025 17:00:39 +0000 (0:00:00.542) 0:06:22.099 *********
2025-05-28 17:00:42.891059 | orchestrator | ok: [testbed-manager]
2025-05-28 17:00:42.891320 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:00:42.891345 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:00:42.891358 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:00:42.892306 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:00:42.892913 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:00:42.893816 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:00:42.894458 | orchestrator |
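Pulling python3-docker from Sid is an apt call with a pinned default release; a minimal sketch, assuming a sid source entry is already configured by an earlier task:

- name: Install python3 docker package from Debian Sid
  ansible.builtin.apt:
    name: python3-docker
    default_release: sid   # assumes a sid entry exists in the apt sources
    state: present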
2025-05-28 17:00:42.895883 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-05-28 17:00:42.896567 | orchestrator | Wednesday 28 May 2025 17:00:42 +0000 (0:00:03.745) 0:06:25.845 *********
2025-05-28 17:00:43.015139 | orchestrator | skipping: [testbed-manager]
2025-05-28 17:00:43.086960 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:00:43.151841 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:00:43.216117 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:00:43.284286 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:00:43.388610 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:00:43.388972 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:00:43.390146 | orchestrator |
2025-05-28 17:00:43.391352 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-05-28 17:00:43.394331 | orchestrator | Wednesday 28 May 2025 17:00:43 +0000 (0:00:00.507) 0:06:26.352 *********
2025-05-28 17:00:43.466313 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-05-28 17:00:43.467511 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-05-28 17:00:43.544123 | orchestrator | skipping: [testbed-manager]
2025-05-28 17:00:43.544805 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-05-28 17:00:43.548786 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-05-28 17:00:43.613039 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:00:43.613775 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-05-28 17:00:43.614837 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-05-28 17:00:43.679034 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:00:43.679809 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-05-28 17:00:43.681448 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-05-28 17:00:43.768783 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:00:43.769602 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-05-28 17:00:43.770772 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-05-28 17:00:43.838124 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:00:43.838369 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-05-28 17:00:43.839186 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-05-28 17:00:43.963720 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:00:43.964305 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-05-28 17:00:43.965548 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-05-28 17:00:43.966290 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:00:43.966589 | orchestrator |
2025-05-28 17:00:43.966984 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-05-28 17:00:43.967837 | orchestrator | Wednesday 28 May 2025 17:00:43 +0000 (0:00:00.573) 0:06:26.926 *********
2025-05-28 17:00:44.101048 | orchestrator | skipping: [testbed-manager]
2025-05-28 17:00:44.196579 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:00:44.265090 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:00:44.356256 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:00:44.422708 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:00:44.541610 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:00:44.541818 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:00:44.542106 | orchestrator |
2025-05-28 17:00:44.542443 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-05-28 17:00:44.543682 | orchestrator | Wednesday 28 May 2025 17:00:44 +0000 (0:00:00.581) 0:06:27.507 *********
2025-05-28 17:00:44.671324 | orchestrator | skipping: [testbed-manager]
2025-05-28 17:00:44.737986 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:00:44.800569 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:00:44.866055 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:00:44.936578 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:00:45.039418 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:00:45.039882 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:00:45.041148 | orchestrator |
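All of the "(install python bindings from pip)" tasks are skipped in this run because the bindings come from distro packages instead. Had that variant been enabled, its core would look roughly like this sketch:

- name: Install python3-pip package (install python bindings from pip)
  ansible.builtin.apt:
    name: python3-pip
    state: present

- name: Install docker packages (install python bindings from pip)
  ansible.builtin.pip:
    name: docker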
2025-05-28 17:00:45.041965 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-05-28 17:00:45.045020 | orchestrator | Wednesday 28 May 2025 17:00:45 +0000 (0:00:00.495) 0:06:28.003 *********
2025-05-28 17:00:45.187088 | orchestrator | skipping: [testbed-manager]
2025-05-28 17:00:45.510930 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:00:45.586964 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:00:45.657407 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:00:45.778074 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:00:45.778317 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:00:45.778533 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:00:45.779669 | orchestrator |
2025-05-28 17:00:45.780097 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-05-28 17:00:45.783175 | orchestrator | Wednesday 28 May 2025 17:00:45 +0000 (0:00:00.738) 0:06:28.742 *********
2025-05-28 17:00:47.482689 | orchestrator | ok: [testbed-manager]
2025-05-28 17:00:47.482829 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:00:47.482949 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:00:47.483012 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:00:47.483429 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:00:47.483680 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:00:47.483922 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:00:47.484155 | orchestrator |
2025-05-28 17:00:47.484519 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-05-28 17:00:47.484789 | orchestrator | Wednesday 28 May 2025 17:00:47 +0000 (0:00:01.704) 0:06:30.446 *********
2025-05-28 17:00:48.322616 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 17:00:48.322760 | orchestrator |
2025-05-28 17:00:48.322840 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-05-28 17:00:48.328614 | orchestrator | Wednesday 28 May 2025 17:00:48 +0000 (0:00:00.842) 0:06:31.289 *********
2025-05-28 17:00:48.728844 | orchestrator | ok: [testbed-manager]
2025-05-28 17:00:49.339102 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:00:49.339221 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:00:49.339427 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:00:49.340313 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:00:49.340726 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:00:49.341758 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:00:49.345627 | orchestrator |
2025-05-28 17:00:49.346085 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-05-28 17:00:49.346873 | orchestrator | Wednesday 28 May 2025 17:00:49 +0000 (0:00:01.014) 0:06:32.303 *********
2025-05-28 17:00:49.754996 | orchestrator | ok: [testbed-manager]
2025-05-28 17:00:50.193404 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:00:50.193529 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:00:50.193807 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:00:50.194152 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:00:50.195510 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:00:50.195763 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:00:50.196422 | orchestrator |
2025-05-28 17:00:50.196826 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-05-28 17:00:50.200084 | orchestrator | Wednesday 28 May 2025 17:00:50 +0000 (0:00:00.850) 0:06:33.153 *********
2025-05-28 17:00:51.582136 | orchestrator | ok: [testbed-manager]
2025-05-28 17:00:51.582999 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:00:51.583220 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:00:51.583516 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:00:51.583774 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:00:51.584041 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:00:51.584560 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:00:51.585467 | orchestrator |
2025-05-28 17:00:51.585781 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-05-28 17:00:51.585884 | orchestrator | Wednesday 28 May 2025 17:00:51 +0000 (0:00:01.390) 0:06:34.544 *********
2025-05-28 17:00:51.731887 | orchestrator | skipping: [testbed-manager]
2025-05-28 17:00:52.935650 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:00:52.936061 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:00:52.937843 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:00:52.938593 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:00:52.939878 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:00:52.940621 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:00:52.941803 | orchestrator |
2025-05-28 17:00:52.942535 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-05-28 17:00:52.943216 | orchestrator | Wednesday 28 May 2025 17:00:52 +0000 (0:00:01.355) 0:06:35.900 *********
2025-05-28 17:00:54.257059 | orchestrator | ok: [testbed-manager]
2025-05-28 17:00:54.258178 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:00:54.258508 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:00:54.262452 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:00:54.262786 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:00:54.263595 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:00:54.264608 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:00:54.265994 | orchestrator |
2025-05-28 17:00:54.266782 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-05-28 17:00:54.267771 | orchestrator | Wednesday 28 May 2025 17:00:54 +0000 (0:00:01.318) 0:06:37.219 *********
2025-05-28 17:00:55.925834 | orchestrator | changed: [testbed-manager]
2025-05-28 17:00:55.926066 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:00:55.926694 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:00:55.928314 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:00:55.929063 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:00:55.929922 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:00:55.930806 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:00:55.931460 | orchestrator |
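The limits file, systemd overlay, and daemon.json are plain file drops that notify a restart; a sketch of the daemon.json step with illustrative content (the real template is role-specific):

- name: Copy daemon.json configuration file
  ansible.builtin.copy:
    dest: /etc/docker/daemon.json
    content: |
      {
        "log-driver": "json-file",
        "log-opts": {
          "max-size": "10m"
        }
      }
    mode: "0644"
  notify: Restart docker service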
2025-05-28 17:00:55.932168 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-05-28 17:00:55.932732 | orchestrator | Wednesday 28 May 2025 17:00:55 +0000 (0:00:01.667) 0:06:38.887 *********
2025-05-28 17:00:56.803017 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 17:00:56.803520 | orchestrator |
2025-05-28 17:00:56.804607 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-05-28 17:00:56.805327 | orchestrator | Wednesday 28 May 2025 17:00:56 +0000 (0:00:00.879) 0:06:39.767 *********
2025-05-28 17:00:58.120930 | orchestrator | ok: [testbed-manager]
2025-05-28 17:00:58.121027 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:00:58.121425 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:00:58.122726 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:00:58.123326 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:00:58.123976 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:00:58.125572 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:00:58.126096 | orchestrator |
2025-05-28 17:00:58.126971 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-05-28 17:00:58.127531 | orchestrator | Wednesday 28 May 2025 17:00:58 +0000 (0:00:01.316) 0:06:41.083 *********
2025-05-28 17:00:59.252868 | orchestrator | ok: [testbed-manager]
2025-05-28 17:00:59.254078 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:00:59.254818 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:00:59.255613 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:00:59.256504 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:00:59.256985 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:00:59.257702 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:00:59.258508 | orchestrator |
2025-05-28 17:00:59.259029 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-05-28 17:00:59.259703 | orchestrator | Wednesday 28 May 2025 17:00:59 +0000 (0:00:01.134) 0:06:42.217 *********
2025-05-28 17:01:00.566682 | orchestrator | ok: [testbed-manager]
2025-05-28 17:01:00.566820 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:01:00.568008 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:01:00.568733 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:01:00.568770 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:01:00.569356 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:01:00.571473 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:01:00.573056 | orchestrator |
2025-05-28 17:01:00.574357 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-05-28 17:01:00.575106 | orchestrator | Wednesday 28 May 2025 17:01:00 +0000 (0:00:01.312) 0:06:43.530 *********
2025-05-28 17:01:01.773788 | orchestrator | ok: [testbed-manager]
2025-05-28 17:01:01.775905 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:01:01.777185 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:01:01.777387 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:01:01.778511 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:01:01.778720 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:01:01.779519 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:01:01.780704 | orchestrator |
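Service management for docker, docker.socket, and containerd maps onto the systemd module; a minimal sketch:

- name: Reload systemd daemon
  ansible.builtin.systemd:
    daemon_reload: true

- name: Manage service
  ansible.builtin.systemd:
    name: docker
    state: started
    enabled: true

- name: Manage docker socket service
  ansible.builtin.systemd:
    name: docker.socket
    state: started
    enabled: true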
2025-05-28 17:01:01.781552 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-05-28 17:01:01.782433 | orchestrator | Wednesday 28 May 2025 17:01:01 +0000 (0:00:01.208) 0:06:44.738 *********
2025-05-28 17:01:03.154835 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 17:01:03.155374 | orchestrator |
2025-05-28 17:01:03.158746 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-28 17:01:03.158774 | orchestrator | Wednesday 28 May 2025 17:01:02 +0000 (0:00:00.915) 0:06:45.653 *********
2025-05-28 17:01:03.158782 | orchestrator |
2025-05-28 17:01:03.158932 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-28 17:01:03.159863 | orchestrator | Wednesday 28 May 2025 17:01:02 +0000 (0:00:00.038) 0:06:45.692 *********
2025-05-28 17:01:03.161205 | orchestrator |
2025-05-28 17:01:03.161655 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-28 17:01:03.162353 | orchestrator | Wednesday 28 May 2025 17:01:02 +0000 (0:00:00.037) 0:06:45.730 *********
2025-05-28 17:01:03.163479 | orchestrator |
2025-05-28 17:01:03.163724 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-28 17:01:03.164434 | orchestrator | Wednesday 28 May 2025 17:01:02 +0000 (0:00:00.045) 0:06:45.775 *********
2025-05-28 17:01:03.165141 | orchestrator |
2025-05-28 17:01:03.165863 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-28 17:01:03.166282 | orchestrator | Wednesday 28 May 2025 17:01:02 +0000 (0:00:00.038) 0:06:45.814 *********
2025-05-28 17:01:03.166741 | orchestrator |
2025-05-28 17:01:03.167179 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-28 17:01:03.167659 | orchestrator | Wednesday 28 May 2025 17:01:02 +0000 (0:00:00.040) 0:06:45.854 *********
2025-05-28 17:01:03.168099 | orchestrator |
2025-05-28 17:01:03.168605 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-28 17:01:03.169046 | orchestrator | Wednesday 28 May 2025 17:01:03 +0000 (0:00:00.223) 0:06:46.078 *********
2025-05-28 17:01:03.169619 | orchestrator |
2025-05-28 17:01:03.170096 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-05-28 17:01:03.170486 | orchestrator | Wednesday 28 May 2025 17:01:03 +0000 (0:00:00.039) 0:06:46.117 *********
2025-05-28 17:01:04.261979 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:01:04.262647 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:01:04.264578 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:01:04.266691 | orchestrator |
2025-05-28 17:01:04.267293 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-05-28 17:01:04.268423 | orchestrator | Wednesday 28 May 2025 17:01:04 +0000 (0:00:01.106) 0:06:47.224 *********
2025-05-28 17:01:05.672441 | orchestrator | changed: [testbed-manager]
2025-05-28 17:01:05.672823 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:01:05.673635 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:01:05.674382 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:01:05.675628 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:01:05.676016 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:01:05.679868 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:01:05.679918 | orchestrator |
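The repeated "Flush handlers" entries are meta tasks; a flush point forces any queued notifications to run at this point in the play instead of at its end, which is why the rsyslog, smartd, and docker restarts execute here. In task form:

- name: Flush handlers
  ansible.builtin.meta: flush_handlers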
2025-05-28 17:01:05.679929 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-05-28 17:01:05.679940 | orchestrator | Wednesday 28 May 2025 17:01:05 +0000 (0:00:01.410) 0:06:48.634 *********
2025-05-28 17:01:06.890209 | orchestrator | changed: [testbed-manager]
2025-05-28 17:01:06.890479 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:01:06.894417 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:01:06.894468 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:01:06.894479 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:01:06.894532 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:01:06.895332 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:01:06.896012 | orchestrator |
2025-05-28 17:01:06.896834 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-05-28 17:01:06.897721 | orchestrator | Wednesday 28 May 2025 17:01:06 +0000 (0:00:01.219) 0:06:49.853 *********
2025-05-28 17:01:07.025289 | orchestrator | skipping: [testbed-manager]
2025-05-28 17:01:09.242279 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:01:09.243772 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:01:09.243803 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:01:09.244157 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:01:09.245018 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:01:09.245436 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:01:09.246972 | orchestrator |
2025-05-28 17:01:09.247077 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-05-28 17:01:09.247617 | orchestrator | Wednesday 28 May 2025 17:01:09 +0000 (0:00:02.349) 0:06:52.203 *********
2025-05-28 17:01:09.356354 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:01:09.356541 | orchestrator |
2025-05-28 17:01:09.356953 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-05-28 17:01:09.357554 | orchestrator | Wednesday 28 May 2025 17:01:09 +0000 (0:00:00.118) 0:06:52.322 *********
2025-05-28 17:01:10.591656 | orchestrator | ok: [testbed-manager]
2025-05-28 17:01:10.591843 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:01:10.592705 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:01:10.593515 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:01:10.594416 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:01:10.595756 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:01:10.596178 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:01:10.597163 | orchestrator |
2025-05-28 17:01:10.597827 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-05-28 17:01:10.598560 | orchestrator | Wednesday 28 May 2025 17:01:10 +0000 (0:00:01.232) 0:06:53.554 *********
2025-05-28 17:01:10.727975 | orchestrator | skipping: [testbed-manager]
2025-05-28 17:01:10.791194 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:01:10.856470 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:01:10.924311 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:01:10.989167 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:01:11.129890 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:01:11.133275 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:01:11.134401 | orchestrator |
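Group membership for the deploy user is a single task; the variable below is a placeholder, not the role's actual name:

- name: Add user to docker group
  ansible.builtin.user:
    name: "{{ docker_user }}"   # placeholder variable
    groups: docker
    append: true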
2025-05-28 17:01:11.135533 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-05-28 17:01:11.135998 | orchestrator | Wednesday 28 May 2025 17:01:11 +0000 (0:00:00.537) 0:06:54.092 *********
2025-05-28 17:01:12.006400 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 17:01:12.006978 | orchestrator |
2025-05-28 17:01:12.008864 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-05-28 17:01:12.009394 | orchestrator | Wednesday 28 May 2025 17:01:11 +0000 (0:00:00.878) 0:06:54.971 *********
2025-05-28 17:01:12.962828 | orchestrator | ok: [testbed-manager]
2025-05-28 17:01:12.963652 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:01:12.965195 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:01:12.967023 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:01:12.967984 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:01:12.968721 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:01:12.969718 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:01:12.970600 | orchestrator |
2025-05-28 17:01:12.971809 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-05-28 17:01:12.974591 | orchestrator | Wednesday 28 May 2025 17:01:12 +0000 (0:00:00.952) 0:06:55.924 *********
2025-05-28 17:01:15.702388 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-05-28 17:01:15.702622 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-05-28 17:01:15.702947 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-05-28 17:01:15.706830 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-05-28 17:01:15.706873 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-05-28 17:01:15.706893 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-05-28 17:01:15.706912 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-05-28 17:01:15.707080 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-05-28 17:01:15.708060 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-05-28 17:01:15.708692 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-05-28 17:01:15.709127 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-05-28 17:01:15.710350 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-05-28 17:01:15.710644 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-05-28 17:01:15.711720 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-05-28 17:01:15.712183 | orchestrator |
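The docker_containers and docker_images items end up as local facts under /etc/ansible/facts.d; a sketch (the template names are assumptions):

- name: Create facts directory
  ansible.builtin.file:
    path: /etc/ansible/facts.d
    state: directory
    mode: "0755"

- name: Copy docker fact files
  ansible.builtin.template:
    src: "{{ item }}.j2"                              # assumed template names
    dest: "/etc/ansible/facts.d/{{ item }}.fact"
    mode: "0755"
  loop:
    - docker_containers
    - docker_images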
2025-05-28 17:01:15.713301 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-05-28 17:01:15.713808 | orchestrator | Wednesday 28 May 2025 17:01:15 +0000 (0:00:02.741) 0:06:58.665 *********
2025-05-28 17:01:15.839477 | orchestrator | skipping: [testbed-manager]
2025-05-28 17:01:15.904422 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:01:15.991910 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:01:16.070539 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:01:16.135847 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:01:16.222422 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:01:16.223149 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:01:16.226711 | orchestrator |
2025-05-28 17:01:16.226744 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-05-28 17:01:16.226930 | orchestrator | Wednesday 28 May 2025 17:01:16 +0000 (0:00:00.520) 0:06:59.185 *********
2025-05-28 17:01:17.027688 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 17:01:17.028347 | orchestrator |
2025-05-28 17:01:17.031815 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-05-28 17:01:17.031838 | orchestrator | Wednesday 28 May 2025 17:01:17 +0000 (0:00:00.806) 0:06:59.992 *********
2025-05-28 17:01:17.500465 | orchestrator | ok: [testbed-manager]
2025-05-28 17:01:17.577190 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:01:18.114130 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:01:18.115151 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:01:18.115941 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:01:18.117387 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:01:18.118546 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:01:18.119777 | orchestrator |
2025-05-28 17:01:18.120896 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-05-28 17:01:18.122077 | orchestrator | Wednesday 28 May 2025 17:01:18 +0000 (0:00:01.082) 0:07:01.074 *********
2025-05-28 17:01:18.524802 | orchestrator | ok: [testbed-manager]
2025-05-28 17:01:18.599277 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:01:18.970322 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:01:18.971917 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:01:18.972907 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:01:18.974650 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:01:18.977603 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:01:18.977776 | orchestrator |
2025-05-28 17:01:18.977832 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-05-28 17:01:18.977846 | orchestrator | Wednesday 28 May 2025 17:01:18 +0000 (0:00:00.859) 0:07:01.934 *********
2025-05-28 17:01:19.114610 | orchestrator | skipping: [testbed-manager]
2025-05-28 17:01:19.187309 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:01:19.251424 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:01:19.320752 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:01:19.390841 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:01:19.478474 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:01:19.480729 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:01:19.482574 | orchestrator |
2025-05-28 17:01:19.483368 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-05-28 17:01:19.484464 | orchestrator | Wednesday 28 May 2025 17:01:19 +0000 (0:00:00.507) 0:07:02.442 *********
2025-05-28 17:01:20.830087 | orchestrator | ok: [testbed-manager]
2025-05-28 17:01:20.833030 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:01:20.833691 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:01:20.834484 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:01:20.835148 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:01:20.837330 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:01:20.838005 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:01:20.840058 | orchestrator |
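The checksum/remove pair is the classic stat-then-act pattern for retiring an old standalone docker-compose binary; a sketch (the path is an assumption), with the removal skipped on every host in this run:

- name: Get checksum of docker-compose file
  ansible.builtin.stat:
    path: /usr/local/bin/docker-compose   # assumed path
    checksum_algorithm: sha256
  register: docker_compose_file

- name: Remove docker-compose binary
  ansible.builtin.file:
    path: /usr/local/bin/docker-compose
    state: absent
  when: docker_compose_file.stat.exists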
2025-05-28 17:01:20.840080 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-05-28 17:01:20.840911 | orchestrator | Wednesday 28 May 2025 17:01:20 +0000 (0:00:01.350) 0:07:03.793 *********
2025-05-28 17:01:20.965459 | orchestrator | skipping: [testbed-manager]
2025-05-28 17:01:21.029158 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:01:21.109278 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:01:21.174120 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:01:21.240457 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:01:21.507022 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:01:21.507352 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:01:21.509137 | orchestrator |
2025-05-28 17:01:21.510012 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-05-28 17:01:21.510986 | orchestrator | Wednesday 28 May 2025 17:01:21 +0000 (0:00:00.677) 0:07:04.470 *********
2025-05-28 17:01:29.551029 | orchestrator | ok: [testbed-manager]
2025-05-28 17:01:29.551170 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:01:29.551792 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:01:29.552456 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:01:29.553531 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:01:29.555644 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:01:29.556185 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:01:29.556723 | orchestrator |
2025-05-28 17:01:29.557151 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-05-28 17:01:29.558122 | orchestrator | Wednesday 28 May 2025 17:01:29 +0000 (0:00:08.041) 0:07:12.512 *********
2025-05-28 17:01:30.885515 | orchestrator | ok: [testbed-manager]
2025-05-28 17:01:30.886917 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:01:30.887747 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:01:30.888640 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:01:30.889418 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:01:30.890327 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:01:30.891178 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:01:30.891792 | orchestrator |
2025-05-28 17:01:30.892418 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-05-28 17:01:30.893038 | orchestrator | Wednesday 28 May 2025 17:01:30 +0000 (0:00:01.333) 0:07:13.846 *********
2025-05-28 17:01:32.652555 | orchestrator | ok: [testbed-manager]
2025-05-28 17:01:32.653237 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:01:32.654469 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:01:32.656033 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:01:32.657568 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:01:32.658591 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:01:32.659537 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:01:32.659985 | orchestrator |
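osism.target is a custom systemd target that the docker-compose units installed next can hook into; a sketch with illustrative unit content:

- name: Copy osism.target systemd file
  ansible.builtin.copy:
    dest: /etc/systemd/system/osism.target
    content: |
      [Unit]
      Description=OSISM target   # illustrative content
      [Install]
      WantedBy=multi-user.target
    mode: "0644"

- name: Enable osism.target
  ansible.builtin.systemd:
    name: osism.target
    enabled: true
    daemon_reload: true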
2025-05-28 17:01:32.661082 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-05-28 17:01:32.661348 | orchestrator | Wednesday 28 May 2025 17:01:32 +0000 (0:00:01.768) 0:07:15.615 *********
2025-05-28 17:01:34.542794 | orchestrator | ok: [testbed-manager]
2025-05-28 17:01:34.544461 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:01:34.544875 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:01:34.546007 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:01:34.546991 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:01:34.547881 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:01:34.548510 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:01:34.549450 | orchestrator |
2025-05-28 17:01:34.550309 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-05-28 17:01:34.550754 | orchestrator | Wednesday 28 May 2025 17:01:34 +0000 (0:00:01.890) 0:07:17.505 *********
2025-05-28 17:01:34.974568 | orchestrator | ok: [testbed-manager]
2025-05-28 17:01:35.401976 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:01:35.402231 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:01:35.403738 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:01:35.404661 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:01:35.405506 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:01:35.406943 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:01:35.407786 | orchestrator |
2025-05-28 17:01:35.408615 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-05-28 17:01:35.409211 | orchestrator | Wednesday 28 May 2025 17:01:35 +0000 (0:00:00.862) 0:07:18.368 *********
2025-05-28 17:01:35.527637 | orchestrator | skipping: [testbed-manager]
2025-05-28 17:01:35.599092 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:01:35.661715 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:01:35.728330 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:01:35.799872 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:01:36.199287 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:01:36.200112 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:01:36.201597 | orchestrator |
2025-05-28 17:01:36.204933 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-05-28 17:01:36.204958 | orchestrator | Wednesday 28 May 2025 17:01:36 +0000 (0:00:00.795) 0:07:19.163 *********
2025-05-28 17:01:36.338936 | orchestrator | skipping: [testbed-manager]
2025-05-28 17:01:36.405511 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:01:36.480892 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:01:36.543184 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:01:36.608153 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:01:36.717639 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:01:36.718317 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:01:36.719562 | orchestrator |
2025-05-28 17:01:36.723774 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-05-28 17:01:36.723802 | orchestrator | Wednesday 28 May 2025 17:01:36 +0000 (0:00:00.519) 0:07:19.683 *********
2025-05-28 17:01:36.845300 | orchestrator | ok: [testbed-manager]
2025-05-28 17:01:36.915487 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:01:37.165441 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:01:37.233556 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:01:37.295110 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:01:37.403687 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:01:37.404605 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:01:37.405394 | orchestrator |
2025-05-28 17:01:37.410493 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-05-28 17:01:37.410582 | orchestrator | Wednesday 28 May 2025 17:01:37 +0000 (0:00:00.685) 0:07:20.368 *********
2025-05-28 17:01:37.533189 | orchestrator | ok: [testbed-manager]
2025-05-28 17:01:37.601690 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:01:37.664161 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:01:37.728150 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:01:37.803230 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:01:37.902516 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:01:37.902901 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:01:37.904179 | orchestrator |
2025-05-28 17:01:37.904812 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-05-28 17:01:37.908721 | orchestrator | Wednesday 28 May 2025 17:01:37 +0000 (0:00:00.498) 0:07:20.867 *********
2025-05-28 17:01:38.038838 | orchestrator | ok: [testbed-manager]
2025-05-28 17:01:38.102940 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:01:38.175945 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:01:38.254805 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:01:38.318648 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:01:38.419389 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:01:38.419621 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:01:38.420814 | orchestrator |
2025-05-28 17:01:38.421356 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-05-28 17:01:38.421937 | orchestrator | Wednesday 28 May 2025 17:01:38 +0000 (0:00:00.516) 0:07:21.383 *********
2025-05-28 17:01:43.758434 | orchestrator | ok: [testbed-manager]
2025-05-28 17:01:43.759181 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:01:43.759212 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:01:43.760450 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:01:43.761193 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:01:43.763912 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:01:43.763936 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:01:43.763948 | orchestrator |
2025-05-28 17:01:43.763961 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-05-28 17:01:43.763974 | orchestrator | Wednesday 28 May 2025 17:01:43 +0000 (0:00:05.338) 0:07:26.722 *********
2025-05-28 17:01:43.908664 | orchestrator | skipping: [testbed-manager]
2025-05-28 17:01:44.001369 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:01:44.088843 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:01:44.155243 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:01:44.423935 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:01:44.551094 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:01:44.552467 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:01:44.556867 | orchestrator |
2025-05-28 17:01:44.558098 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-05-28 17:01:44.558676 | orchestrator | Wednesday 28 May 2025 17:01:44 +0000 (0:00:00.792) 0:07:27.514 *********
2025-05-28 17:01:45.375684 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 17:01:45.377760 | orchestrator |
2025-05-28 17:01:45.379977 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-05-28 17:01:45.384063 | orchestrator | Wednesday 28 May 2025 17:01:45 +0000 (0:00:00.821) 0:07:28.336 *********
2025-05-28 17:01:47.166895 | orchestrator | ok: [testbed-manager]
2025-05-28 17:01:47.167000 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:01:47.167736 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:01:47.170207 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:01:47.171137 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:01:47.172362 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:01:47.174412 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:01:47.174462 | orchestrator |
2025-05-28 17:01:47.175354 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-05-28 17:01:47.177581 | orchestrator | Wednesday 28 May 2025 17:01:47 +0000 (0:00:01.792) 0:07:30.128 *********
2025-05-28 17:01:48.450366 | orchestrator | ok: [testbed-manager]
2025-05-28 17:01:48.450561 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:01:48.451028 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:01:48.451882 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:01:48.453042 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:01:48.453680 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:01:48.454696 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:01:48.455439 | orchestrator |
2025-05-28 17:01:48.456115 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-05-28 17:01:48.456600 | orchestrator | Wednesday 28 May 2025 17:01:48 +0000 (0:00:01.285) 0:07:31.413 *********
2025-05-28 17:01:49.008219 | orchestrator | ok: [testbed-manager]
2025-05-28 17:01:49.079206 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:01:49.521022 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:01:49.521525 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:01:49.522918 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:01:49.523153 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:01:49.524610 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:01:49.525530 | orchestrator |
2025-05-28 17:01:49.526491 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-05-28 17:01:49.527099 | orchestrator | Wednesday 28 May 2025 17:01:49 +0000 (0:00:01.068) 0:07:32.482 *********
2025-05-28 17:01:51.192204 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-28 17:01:51.193076 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-28 17:01:51.193876 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-28 17:01:51.195369 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-28 17:01:51.196491 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-28 17:01:51.196962 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-28 17:01:51.197882 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-28 17:01:51.198709 | orchestrator |
2025-05-28 17:01:51.199646 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-05-28 17:01:51.200250 | orchestrator | Wednesday 28 May 2025 17:01:51 +0000 (0:00:01.671) 0:07:34.153 *********
2025-05-28 17:01:52.008102 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 17:01:52.009420 | orchestrator |
2025-05-28 17:01:52.009987 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-05-28 17:01:52.010906 | orchestrator | Wednesday 28 May 2025 17:01:52 +0000 (0:00:00.820) 0:07:34.974 *********
2025-05-28 17:02:01.428146 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:02:01.428407 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:02:01.429409 | orchestrator | changed: [testbed-manager]
2025-05-28 17:02:01.429921 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:02:01.431883 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:02:01.432787 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:02:01.433446 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:02:01.434188 | orchestrator |
2025-05-28 17:02:01.434583 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-05-28 17:02:01.435471 | orchestrator | Wednesday 28 May 2025 17:02:01 +0000 (0:00:09.416) 0:07:44.390 *********
2025-05-28 17:02:03.128915 | orchestrator | ok: [testbed-manager]
2025-05-28 17:02:03.129094 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:02:03.130006 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:02:03.131155 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:02:03.131470 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:02:03.132394 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:02:03.134703 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:02:03.134727 | orchestrator |
2025-05-28 17:02:03.134740 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-05-28 17:02:03.134753 | orchestrator | Wednesday 28 May 2025 17:02:03 +0000 (0:00:01.697) 0:07:46.088 *********
2025-05-28 17:02:04.404706 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:02:04.406312 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:02:04.407872 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:02:04.409374 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:02:04.410496 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:02:04.411472 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:02:04.412684 | orchestrator |
2025-05-28 17:02:04.413615 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-05-28 17:02:04.414759 | orchestrator | Wednesday 28 May 2025 17:02:04 +0000 (0:00:01.281) 0:07:47.369 *********
2025-05-28 17:02:05.836580 | orchestrator | changed: [testbed-manager]
2025-05-28 17:02:05.836697 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:02:05.836712 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:02:05.837936 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:02:05.839985 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:02:05.841198 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:02:05.842522 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:02:05.843270 | orchestrator |
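chrony.conf is rendered from the collection's chrony.conf.j2 template, and the change notifies the Restart chrony service handler seen just above; a sketch of the copy step:

- name: Copy configuration file
  ansible.builtin.template:
    src: chrony.conf.j2
    dest: /etc/chrony/chrony.conf   # destination path is an assumption
    mode: "0644"
  notify: Restart chrony service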
2025-05-28 17:02:05.843927 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-05-28 17:02:05.846133 | orchestrator |
2025-05-28 17:02:05.846932 | orchestrator | TASK [Include hardening role] **************************************************
2025-05-28 17:02:05.847579 | orchestrator | Wednesday 28 May 2025 17:02:05 +0000 (0:00:01.428) 0:07:48.798 *********
2025-05-28 17:02:05.961883 | orchestrator | skipping: [testbed-manager]
2025-05-28 17:02:06.023956 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:02:06.086980 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:02:06.153512 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:02:06.226644 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:02:06.360441 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:02:06.360872 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:02:06.361515 | orchestrator |
2025-05-28 17:02:06.362239 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-05-28 17:02:06.365508 | orchestrator |
2025-05-28 17:02:06.365542 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-05-28 17:02:06.365554 | orchestrator | Wednesday 28 May 2025 17:02:06 +0000 (0:00:00.525) 0:07:49.324 *********
2025-05-28 17:02:07.689111 | orchestrator | changed: [testbed-manager]
2025-05-28 17:02:07.689379 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:02:07.689959 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:02:07.691969 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:02:07.692695 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:02:07.693324 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:02:07.694333 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:02:07.695082 | orchestrator |
2025-05-28 17:02:07.695940 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-05-28 17:02:07.696649 | orchestrator | Wednesday 28 May 2025 17:02:07 +0000 (0:00:01.329) 0:07:50.653 *********
2025-05-28 17:02:09.279915 | orchestrator | ok: [testbed-manager]
2025-05-28 17:02:09.280709 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:02:09.284880 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:02:09.285795 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:02:09.288749 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:02:09.288791 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:02:09.291222 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:02:09.291633 | orchestrator |
2025-05-28 17:02:09.295421 | orchestrator | TASK [Include auditd role] *****************************************************
2025-05-28 17:02:09.295700 | orchestrator | Wednesday 28 May 2025 17:02:09 +0000 (0:00:01.589) 0:07:52.243 *********
2025-05-28 17:02:09.426427 | orchestrator | skipping: [testbed-manager]
2025-05-28 17:02:09.493080 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:02:09.566976 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:02:09.627786 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:02:09.702305 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:02:10.095378 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:02:10.095546 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:02:10.096558 | orchestrator |
2025-05-28 17:02:10.097795 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-05-28 17:02:10.098802 | orchestrator | Wednesday 28 May 2025 17:02:10 +0000 (0:00:00.816) 0:07:53.059 *********
2025-05-28 17:02:11.359670 | orchestrator | changed: [testbed-manager]
2025-05-28 17:02:11.360573 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:02:11.361533 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:02:11.362708 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:02:11.363471 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:02:11.364505 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:02:11.365304 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:02:11.366102 | orchestrator |
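The journald role follows the same copy-then-notify pattern as the other configuration tasks; a sketch with illustrative settings:

- name: Copy configuration file
  ansible.builtin.copy:
    dest: /etc/systemd/journald.conf
    content: |
      [Journal]
      Storage=persistent   # illustrative settings
      SystemMaxUse=1G
    mode: "0644"
  notify: Restart journald service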
2025-05-28 17:02:09.295421 | orchestrator | TASK [Include auditd role] *****************************************************
2025-05-28 17:02:09.295700 | orchestrator | Wednesday 28 May 2025 17:02:09 +0000 (0:00:01.589) 0:07:52.243 *********
2025-05-28 17:02:09.426427 | orchestrator | skipping: [testbed-manager]
2025-05-28 17:02:09.493080 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:02:09.566976 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:02:09.627786 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:02:09.702305 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:02:10.095378 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:02:10.095546 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:02:10.096558 | orchestrator |
2025-05-28 17:02:10.097795 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-05-28 17:02:10.098802 | orchestrator | Wednesday 28 May 2025 17:02:10 +0000 (0:00:00.816) 0:07:53.059 *********
2025-05-28 17:02:11.359670 | orchestrator | changed: [testbed-manager]
2025-05-28 17:02:11.360573 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:02:11.361533 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:02:11.362708 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:02:11.363471 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:02:11.364505 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:02:11.365304 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:02:11.366102 | orchestrator |
2025-05-28 17:02:11.368046 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-05-28 17:02:11.368496 | orchestrator |
2025-05-28 17:02:11.369497 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-05-28 17:02:11.370335 | orchestrator | Wednesday 28 May 2025 17:02:11 +0000 (0:00:01.263) 0:07:54.323 *********
2025-05-28 17:02:12.345601 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 17:02:12.345948 | orchestrator |
2025-05-28 17:02:12.347051 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-05-28 17:02:12.348253 | orchestrator | Wednesday 28 May 2025 17:02:12 +0000 (0:00:00.985) 0:07:55.308 *********
2025-05-28 17:02:13.180695 | orchestrator | ok: [testbed-manager]
2025-05-28 17:02:13.183079 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:02:13.184444 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:02:13.185389 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:02:13.185988 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:02:13.186892 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:02:13.187697 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:02:13.188487 | orchestrator |
2025-05-28 17:02:13.188844 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-05-28 17:02:13.189969 | orchestrator | Wednesday 28 May 2025 17:02:13 +0000 (0:00:00.833) 0:07:56.142 *********
2025-05-28 17:02:14.311979 | orchestrator | changed: [testbed-manager]
2025-05-28 17:02:14.312185 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:02:14.313138 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:02:14.314275 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:02:14.314656 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:02:14.315628 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:02:14.316159 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:02:14.316453 | orchestrator |
2025-05-28 17:02:14.317412 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-05-28 17:02:14.317576 | orchestrator | Wednesday 28 May 2025 17:02:14 +0000 (0:00:01.133) 0:07:57.275 *********
2025-05-28 17:02:15.346482 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 17:02:15.350383 | orchestrator |
2025-05-28 17:02:15.350423 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-05-28 17:02:15.350434 | orchestrator | Wednesday 28 May 2025 17:02:15 +0000 (0:00:01.033) 0:07:58.309 *********
2025-05-28 17:02:15.806629 | orchestrator | ok: [testbed-manager]
2025-05-28 17:02:16.221733 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:02:16.222880 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:02:16.224531 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:02:16.224994 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:02:16.226784 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:02:16.227695 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:02:16.228663 | orchestrator |
2025-05-28 17:02:16.229191 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-05-28 17:02:16.230214 | orchestrator | Wednesday 28 May 2025 17:02:16 +0000 (0:00:00.875) 0:07:59.184 *********
2025-05-28 17:02:16.708349 | orchestrator | changed: [testbed-manager]
2025-05-28 17:02:17.397753 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:02:17.397944 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:02:17.398910 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:02:17.400767 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:02:17.401538 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:02:17.402162 | orchestrator | changed: [testbed-node-2]
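[Editor's note] The osism.commons.state role records the bootstrap state as a local Ansible fact on each host ("Create custom facts directory" plus "Write state into file"). The exact file contents are not shown in this log; a hedged sketch of the standard local-facts mechanism, with hypothetical file name and keys suggested by the task names, is:

    # Local facts live in /etc/ansible/facts.d; values here are assumptions
    sudo mkdir -p /etc/ansible/facts.d
    printf '[bootstrap]\nstatus = True\n' | sudo tee /etc/ansible/facts.d/osism.fact
    # After the next fact gathering this would surface as
    # ansible_local.osism.bootstrap.status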
2025-05-28 17:02:17.404226 | orchestrator |
2025-05-28 17:02:17.405211 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 17:02:17.405253 | orchestrator | 2025-05-28 17:02:17 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-28 17:02:17.405267 | orchestrator | 2025-05-28 17:02:17 | INFO  | Please wait and do not abort execution.
2025-05-28 17:02:17.405502 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-05-28 17:02:17.406337 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-28 17:02:17.406801 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-28 17:02:17.407063 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-28 17:02:17.407465 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-05-28 17:02:17.408962 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-28 17:02:17.408981 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-28 17:02:17.408993 | orchestrator |
2025-05-28 17:02:17.410814 | orchestrator |
2025-05-28 17:02:17.410834 | orchestrator | TASKS RECAP ********************************************************************
2025-05-28 17:02:17.411355 | orchestrator | Wednesday 28 May 2025 17:02:17 +0000 (0:00:01.172) 0:08:00.357 *********
2025-05-28 17:02:17.412743 | orchestrator | ===============================================================================
2025-05-28 17:02:17.412788 | orchestrator | osism.commons.packages : Install required packages --------------------- 77.51s
2025-05-28 17:02:17.413101 | orchestrator | osism.commons.packages : Download required packages -------------------- 37.80s
2025-05-28 17:02:17.413500 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.48s
2025-05-28 17:02:17.417870 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.38s
2025-05-28 17:02:17.417892 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.24s
2025-05-28 17:02:17.417903 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.64s
2025-05-28 17:02:17.417914 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.14s
2025-05-28 17:02:17.417925 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.60s
2025-05-28 17:02:17.417936 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.42s
2025-05-28 17:02:17.417947 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.33s
2025-05-28 17:02:17.417991 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.18s
2025-05-28 17:02:17.418002 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.19s
2025-05-28 17:02:17.419017 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.06s
2025-05-28 17:02:17.419039 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.04s
2025-05-28 17:02:17.419050 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.96s
2025-05-28 17:02:17.419118 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.79s
2025-05-28 17:02:17.420028 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 6.80s
2025-05-28 17:02:17.420070 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.62s
2025-05-28 17:02:17.420153 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.80s
2025-05-28 17:02:17.420501 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.72s
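[Editor's note] With the bootstrap play finished, the job script drives the remaining roles through the osism CLI one play at a time; each call enqueues a task (the "Registering Redlock..." and "Task ... was prepared for execution" lines below) and then streams the Ansible output. The calls that follow in this log, verbatim:

    osism apply network
    osism apply wireguard
    osism apply --environment custom workarounds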
2025-05-28 17:02:18.135493 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-05-28 17:02:18.135609 | orchestrator | + osism apply network
2025-05-28 17:02:20.303423 | orchestrator | Registering Redlock._acquired_script
2025-05-28 17:02:20.303551 | orchestrator | Registering Redlock._extend_script
2025-05-28 17:02:20.303562 | orchestrator | Registering Redlock._release_script
2025-05-28 17:02:20.369102 | orchestrator | 2025-05-28 17:02:20 | INFO  | Task 2996d9a1-fc77-418e-bf63-8f6981fb3acb (network) was prepared for execution.
2025-05-28 17:02:20.369219 | orchestrator | 2025-05-28 17:02:20 | INFO  | It takes a moment until task 2996d9a1-fc77-418e-bf63-8f6981fb3acb (network) has been started and output is visible here.
2025-05-28 17:02:24.695161 | orchestrator |
2025-05-28 17:02:24.698734 | orchestrator | PLAY [Apply role network] ******************************************************
2025-05-28 17:02:24.702146 | orchestrator |
2025-05-28 17:02:24.702724 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-05-28 17:02:24.703740 | orchestrator | Wednesday 28 May 2025 17:02:24 +0000 (0:00:00.285) 0:00:00.285 *********
2025-05-28 17:02:24.840092 | orchestrator | ok: [testbed-manager]
2025-05-28 17:02:24.920963 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:02:24.995021 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:02:25.069551 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:02:25.256461 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:02:25.411842 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:02:25.419944 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:02:25.419991 | orchestrator |
2025-05-28 17:02:25.420005 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-05-28 17:02:25.420018 | orchestrator | Wednesday 28 May 2025 17:02:25 +0000 (0:00:00.716) 0:00:01.001 *********
2025-05-28 17:02:26.631490 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-28 17:02:26.631720 | orchestrator |
2025-05-28 17:02:26.632808 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-05-28 17:02:26.633731 | orchestrator | Wednesday 28 May 2025 17:02:26 +0000 (0:00:01.217) 0:00:02.220 *********
2025-05-28 17:02:28.718933 | orchestrator | ok: [testbed-manager]
2025-05-28 17:02:28.719813 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:02:28.724908 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:02:28.725586 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:02:28.727505 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:02:28.727885 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:02:28.728850 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:02:28.729765 | orchestrator |
2025-05-28 17:02:28.730583 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-05-28 17:02:28.731888 | orchestrator | Wednesday 28 May 2025 17:02:28 +0000 (0:00:02.090) 0:00:04.310 *********
2025-05-28 17:02:30.460711 | orchestrator | ok: [testbed-manager]
2025-05-28 17:02:30.461556 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:02:30.462769 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:02:30.466251 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:02:30.466315 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:02:30.466329 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:02:30.466340 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:02:30.466794 | orchestrator |
2025-05-28 17:02:30.467694 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-05-28 17:02:30.467948 | orchestrator | Wednesday 28 May 2025 17:02:30 +0000 (0:00:01.739) 0:00:06.050 *********
2025-05-28 17:02:30.997247 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-05-28 17:02:30.997441 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-05-28 17:02:31.449894 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-05-28 17:02:31.450007 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-05-28 17:02:31.451208 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-05-28 17:02:31.451628 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-05-28 17:02:31.452391 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-05-28 17:02:31.452453 | orchestrator |
2025-05-28 17:02:31.457012 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-05-28 17:02:31.457034 | orchestrator | Wednesday 28 May 2025 17:02:31 +0000 (0:00:00.993) 0:00:07.043 *********
2025-05-28 17:02:34.918129 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-28 17:02:34.918355 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-28 17:02:34.919082 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-05-28 17:02:34.919696 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-28 17:02:34.920475 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-05-28 17:02:34.920854 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-05-28 17:02:34.921541 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-05-28 17:02:34.923126 | orchestrator |
2025-05-28 17:02:34.923659 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-05-28 17:02:34.923972 | orchestrator | Wednesday 28 May 2025 17:02:34 +0000 (0:00:03.466) 0:00:10.510 *********
2025-05-28 17:02:36.608022 | orchestrator | changed: [testbed-manager]
2025-05-28 17:02:36.608688 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:02:36.612546 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:02:36.613087 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:02:36.613744 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:02:36.614355 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:02:36.615984 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:02:36.616792 | orchestrator |
2025-05-28 17:02:36.619959 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-05-28 17:02:36.620695 | orchestrator | Wednesday 28 May 2025 17:02:36 +0000 (0:00:01.686) 0:00:12.197 *********
2025-05-28 17:02:38.279680 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-28 17:02:38.279864 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-05-28 17:02:38.280588 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-28 17:02:38.281610 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-05-28 17:02:38.283013 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-05-28 17:02:38.283354 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-28 17:02:38.284549 | orchestrator | ok: [testbed-node-5 -> localhost]
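[Editor's note] The role templates the netplan configuration on the orchestrator ("Prepare netplan configuration template", delegated to localhost) and then copies the result out to each host. The cleanup task further below keeps /etc/netplan/01-osism.yaml and removes the cloud-init default, so the copied file is presumably 01-osism.yaml. Its contents are not in this log; a hedged, minimal sketch of such a file (interface name and address are illustrative only) is:

    # Hypothetical rendering of /etc/netplan/01-osism.yaml, illustrative only
    sudo tee /etc/netplan/01-osism.yaml <<'EOF'
    network:
      version: 2
      ethernets:
        ens3:
          dhcp4: false
          addresses:
            - 192.168.16.10/20
    EOF
    sudo netplan apply   # done later by the workarounds play in this log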
2025-05-28 17:02:38.285593 | orchestrator |
2025-05-28 17:02:38.286310 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-05-28 17:02:38.288110 | orchestrator | Wednesday 28 May 2025 17:02:38 +0000 (0:00:01.674) 0:00:13.871 *********
2025-05-28 17:02:38.718742 | orchestrator | ok: [testbed-manager]
2025-05-28 17:02:39.007853 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:02:39.434803 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:02:39.434919 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:02:39.439007 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:02:39.439030 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:02:39.439040 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:02:39.439082 | orchestrator |
2025-05-28 17:02:39.439764 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-05-28 17:02:39.440432 | orchestrator | Wednesday 28 May 2025 17:02:39 +0000 (0:00:01.150) 0:00:15.022 *********
2025-05-28 17:02:39.611526 | orchestrator | skipping: [testbed-manager]
2025-05-28 17:02:39.695388 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:02:39.779186 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:02:39.860277 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:02:39.941524 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:02:40.086825 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:02:40.087371 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:02:40.088232 | orchestrator |
2025-05-28 17:02:40.092016 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-05-28 17:02:40.092047 | orchestrator | Wednesday 28 May 2025 17:02:40 +0000 (0:00:00.654) 0:00:15.676 *********
2025-05-28 17:02:42.153499 | orchestrator | ok: [testbed-manager]
2025-05-28 17:02:42.154070 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:02:42.155386 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:02:42.156944 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:02:42.158081 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:02:42.159621 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:02:42.160094 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:02:42.161059 | orchestrator |
2025-05-28 17:02:42.161496 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-05-28 17:02:42.162356 | orchestrator | Wednesday 28 May 2025 17:02:42 +0000 (0:00:02.064) 0:00:17.741 *********
2025-05-28 17:02:42.407836 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:02:42.492723 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:02:42.579216 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:02:42.658480 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:02:43.045002 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:02:43.045179 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:02:43.046415 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-05-28 17:02:43.047176 | orchestrator |
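[Editor's note] Only the manager gets a dispatcher script: networkd-dispatcher runs hooks from /etc/networkd-dispatcher/routable.d/ whenever an interface reaches the routable state. The source file /opt/configuration/network/iptables.sh is not shown in this log; as a hedged sketch of the shape such a hook could take (the NAT rule and interface name are pure assumptions):

    # Hypothetical routable.d hook, illustrative only
    sudo tee /etc/networkd-dispatcher/routable.d/iptables.sh <<'EOF'
    #!/usr/bin/env bash
    # Re-apply NAT whenever the uplink becomes routable
    iptables -t nat -A POSTROUTING -o ens3 -j MASQUERADE
    EOF
    sudo chmod +x /etc/networkd-dispatcher/routable.d/iptables.sh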
2025-05-28 17:02:43.047874 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-05-28 17:02:43.048490 | orchestrator | Wednesday 28 May 2025 17:02:43 +0000 (0:00:00.895) 0:00:18.636 *********
2025-05-28 17:02:44.691562 | orchestrator | ok: [testbed-manager]
2025-05-28 17:02:44.692129 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:02:44.693084 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:02:44.697387 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:02:44.697446 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:02:44.697515 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:02:44.697636 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:02:44.698913 | orchestrator |
2025-05-28 17:02:44.700197 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-05-28 17:02:44.700547 | orchestrator | Wednesday 28 May 2025 17:02:44 +0000 (0:00:01.637) 0:00:20.274 *********
2025-05-28 17:02:45.992975 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-28 17:02:45.993196 | orchestrator |
2025-05-28 17:02:45.994285 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-05-28 17:02:45.994836 | orchestrator | Wednesday 28 May 2025 17:02:45 +0000 (0:00:01.307) 0:00:21.582 *********
2025-05-28 17:02:46.818243 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:02:47.214866 | orchestrator | ok: [testbed-manager]
2025-05-28 17:02:47.215657 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:02:47.217038 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:02:47.220502 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:02:47.220541 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:02:47.220588 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:02:47.220599 | orchestrator |
2025-05-28 17:02:47.220831 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-05-28 17:02:47.222664 | orchestrator | Wednesday 28 May 2025 17:02:47 +0000 (0:00:01.221) 0:00:22.804 *********
2025-05-28 17:02:47.381575 | orchestrator | ok: [testbed-manager]
2025-05-28 17:02:47.481115 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:02:47.577761 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:02:47.666775 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:02:47.746990 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:02:47.891486 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:02:47.891671 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:02:47.892689 | orchestrator |
2025-05-28 17:02:47.893367 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-05-28 17:02:47.897070 | orchestrator | Wednesday 28 May 2025 17:02:47 +0000 (0:00:00.680) 0:00:23.485 *********
2025-05-28 17:02:48.554700 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2025-05-28 17:02:48.554893 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2025-05-28 17:02:48.555906 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2025-05-28 17:02:48.556886 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2025-05-28 17:02:48.557214 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2025-05-28 17:02:48.559736 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2025-05-28 17:02:48.560173 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2025-05-28 17:02:48.560680 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2025-05-28 17:02:48.660771 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2025-05-28 17:02:48.660955 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2025-05-28 17:02:49.084045 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2025-05-28 17:02:49.084730 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2025-05-28 17:02:49.088927 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2025-05-28 17:02:49.088975 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2025-05-28 17:02:49.088988 | orchestrator |
2025-05-28 17:02:49.089700 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2025-05-28 17:02:49.090763 | orchestrator | Wednesday 28 May 2025 17:02:49 +0000 (0:00:01.187) 0:00:24.673 *********
2025-05-28 17:02:49.249235 | orchestrator | skipping: [testbed-manager]
2025-05-28 17:02:49.331618 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:02:49.413890 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:02:49.508290 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:02:49.588909 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:02:49.706893 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:02:49.707058 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:02:49.707463 | orchestrator |
2025-05-28 17:02:49.708148 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2025-05-28 17:02:49.709639 | orchestrator | Wednesday 28 May 2025 17:02:49 +0000 (0:00:00.626) 0:00:25.299 *********
2025-05-28 17:02:53.304920 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-manager, testbed-node-1, testbed-node-3, testbed-node-2, testbed-node-5, testbed-node-4
2025-05-28 17:02:53.305699 | orchestrator |
2025-05-28 17:02:53.307552 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2025-05-28 17:02:53.308924 | orchestrator | Wednesday 28 May 2025 17:02:53 +0000 (0:00:03.594) 0:00:28.893 *********
2025-05-28 17:02:57.981816 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-05-28 17:02:58.089584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-05-28 17:02:58.089666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-05-28 17:02:58.089680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-05-28 17:02:58.089692 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-05-28 17:02:58.089704 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-05-28 17:02:58.089714 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-05-28 17:02:58.089744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-05-28 17:02:58.089756 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-05-28 17:02:58.089767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-05-28 17:02:58.089786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-05-28 17:02:58.089797 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-05-28 17:02:58.089839 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-05-28 17:02:58.089851 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-05-28 17:02:58.089863 | orchestrator |
2025-05-28 17:02:58.089875 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2025-05-28 17:02:58.089897 | orchestrator | Wednesday 28 May 2025 17:02:57 +0000 (0:00:04.678) 0:00:33.572 *********
2025-05-28 17:03:02.752892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-05-28 17:03:02.755378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-05-28 17:03:02.757508 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-05-28 17:03:02.758186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-05-28 17:03:02.759356 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-05-28 17:03:02.760570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-05-28 17:03:02.761413 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-05-28 17:03:02.763545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-05-28 17:03:02.763596 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-05-28 17:03:02.763609 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-05-28 17:03:02.763968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-05-28 17:03:02.764949 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-05-28 17:03:02.765172 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-05-28 17:03:02.765987 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
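[Editor's note] Each host gets two unicast VXLAN overlays (vni 42 and 23) rendered as systemd-networkd .netdev/.network pairs; the dests lists above imply static per-peer forwarding entries rather than multicast. The role's actual templates are not in this log; a hedged reconstruction for vxlan0 on testbed-manager, using only the logged parameters (vni 42, local_ip 192.168.16.5, mtu 1350, address 192.168.112.5/20) and otherwise assumed layout, could look like:

    # Plausible rendering, illustrative only
    sudo tee /etc/systemd/network/30-vxlan0.netdev <<'EOF'
    [NetDev]
    Name=vxlan0
    Kind=vxlan
    MTUBytes=1350

    [VXLAN]
    VNI=42
    Local=192.168.16.5
    EOF
    sudo tee /etc/systemd/network/30-vxlan0.network <<'EOF'
    [Match]
    Name=vxlan0

    [Network]
    Address=192.168.112.5/20

    # One all-zero FDB entry per remote VTEP (one shown; the role lists six dests)
    [BridgeFDB]
    MACAddress=00:00:00:00:00:00
    Destination=192.168.16.10
    EOF
    # The underlay interface's .network would also need VXLAN=vxlan0 to attach it
    sudo networkctl reload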
2025-05-28 17:03:02.766512 | orchestrator |
2025-05-28 17:03:02.767293 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2025-05-28 17:03:02.767694 | orchestrator | Wednesday 28 May 2025 17:03:02 +0000 (0:00:04.768) 0:00:38.341 *********
2025-05-28 17:03:04.016589 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-28 17:03:04.017939 | orchestrator |
2025-05-28 17:03:04.017980 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-05-28 17:03:04.019264 | orchestrator | Wednesday 28 May 2025 17:03:04 +0000 (0:00:01.264) 0:00:39.606 *********
2025-05-28 17:03:04.465690 | orchestrator | ok: [testbed-manager]
2025-05-28 17:03:04.999097 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:03:04.999518 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:03:05.000482 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:03:05.001404 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:03:05.002749 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:03:05.004006 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:03:05.004721 | orchestrator |
2025-05-28 17:03:05.005263 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-05-28 17:03:05.005997 | orchestrator | Wednesday 28 May 2025 17:03:04 +0000 (0:00:00.986) 0:00:40.592 *********
2025-05-28 17:03:05.092667 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2025-05-28 17:03:05.093001 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-05-28 17:03:05.093938 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2025-05-28 17:03:05.187712 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-05-28 17:03:05.187960 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2025-05-28 17:03:05.188410 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-05-28 17:03:05.189736 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2025-05-28 17:03:05.190240 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-05-28 17:03:05.283966 | orchestrator | skipping: [testbed-manager]
2025-05-28 17:03:05.285524 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2025-05-28 17:03:05.286571 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-05-28 17:03:05.289823 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2025-05-28 17:03:05.289851 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-05-28 17:03:05.575711 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:03:05.576947 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2025-05-28 17:03:05.577635 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-05-28 17:03:05.578277 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2025-05-28 17:03:05.579180 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-05-28 17:03:05.680399 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:03:05.681379 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2025-05-28 17:03:05.682377 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-05-28 17:03:05.683376 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2025-05-28 17:03:05.684175 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-05-28 17:03:05.787559 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:03:05.788604 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2025-05-28 17:03:05.789924 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-05-28 17:03:05.791156 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2025-05-28 17:03:05.791704 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-05-28 17:03:07.020594 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:03:07.021895 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:03:07.022167 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2025-05-28 17:03:07.024171 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-05-28 17:03:07.025443 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2025-05-28 17:03:07.026170 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-05-28 17:03:07.027050 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:03:07.027837 | orchestrator |
2025-05-28 17:03:07.028469 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2025-05-28 17:03:07.029141 | orchestrator | Wednesday 28 May 2025 17:03:07 +0000 (0:00:02.017) 0:00:42.609 *********
2025-05-28 17:03:07.186390 | orchestrator | skipping: [testbed-manager]
2025-05-28 17:03:07.293719 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:03:07.375077 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:03:07.454700 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:03:07.541768 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:03:07.667811 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:03:07.668511 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:03:07.669594 | orchestrator |
2025-05-28 17:03:07.671630 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2025-05-28 17:03:07.672697 | orchestrator | Wednesday 28 May 2025 17:03:07 +0000 (0:00:00.651) 0:00:43.261 *********
2025-05-28 17:03:08.023963 | orchestrator | skipping: [testbed-manager]
2025-05-28 17:03:08.109512 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:03:08.193956 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:03:08.273857 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:03:08.358811 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:03:08.403439 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:03:08.403602 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:03:08.404332 | orchestrator |
2025-05-28 17:03:08.407092 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 17:03:08.407213 | orchestrator | 2025-05-28 17:03:08 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-28 17:03:08.407233 | orchestrator | 2025-05-28 17:03:08 | INFO  | Please wait and do not abort execution.
2025-05-28 17:03:08.407442 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-28 17:03:08.407468 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-28 17:03:08.407479 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-28 17:03:08.407491 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-28 17:03:08.407501 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-28 17:03:08.407547 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-28 17:03:08.408114 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-28 17:03:08.408186 | orchestrator |
2025-05-28 17:03:08.408880 | orchestrator |
2025-05-28 17:03:08.413010 | orchestrator | TASKS RECAP ********************************************************************
2025-05-28 17:03:08.414539 | orchestrator | Wednesday 28 May 2025 17:03:08 +0000 (0:00:00.735) 0:00:43.996 *********
2025-05-28 17:03:08.414829 | orchestrator | ===============================================================================
2025-05-28 17:03:08.415472 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 4.77s
2025-05-28 17:03:08.415754 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.68s
2025-05-28 17:03:08.416264 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 3.59s
2025-05-28 17:03:08.416778 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.47s
2025-05-28 17:03:08.417069 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.09s
2025-05-28 17:03:08.417502 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.06s
2025-05-28 17:03:08.417808 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.02s
2025-05-28 17:03:08.418219 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.74s
2025-05-28 17:03:08.418725 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.69s
2025-05-28 17:03:08.420515 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.67s
2025-05-28 17:03:08.421615 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.64s
2025-05-28 17:03:08.422660 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.31s
2025-05-28 17:03:08.423657 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.26s
2025-05-28 17:03:08.424256 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.22s
2025-05-28 17:03:08.424844 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.22s
2025-05-28 17:03:08.425469 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.19s
2025-05-28 17:03:08.426759 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.15s
2025-05-28 17:03:08.427261 | orchestrator | osism.commons.network : Create required directories --------------------- 0.99s
2025-05-28 17:03:08.427795 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.99s
2025-05-28 17:03:08.428262 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.90s
2025-05-28 17:03:09.020611 | orchestrator | + osism apply wireguard
2025-05-28 17:03:10.710485 | orchestrator | Registering Redlock._acquired_script
2025-05-28 17:03:10.710554 | orchestrator | Registering Redlock._extend_script
2025-05-28 17:03:10.710568 | orchestrator | Registering Redlock._release_script
2025-05-28 17:03:10.770964 | orchestrator | 2025-05-28 17:03:10 | INFO  | Task 95ec95bb-4aae-4eee-8d0e-60f65ba84cdf (wireguard) was prepared for execution.
2025-05-28 17:03:10.771065 | orchestrator | 2025-05-28 17:03:10 | INFO  | It takes a moment until task 95ec95bb-4aae-4eee-8d0e-60f65ba84cdf (wireguard) has been started and output is visible here.
2025-05-28 17:03:14.788566 | orchestrator |
2025-05-28 17:03:14.790576 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2025-05-28 17:03:14.791246 | orchestrator |
2025-05-28 17:03:14.791898 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2025-05-28 17:03:14.792633 | orchestrator | Wednesday 28 May 2025 17:03:14 +0000 (0:00:00.223) 0:00:00.223 *********
2025-05-28 17:03:16.277864 | orchestrator | ok: [testbed-manager]
2025-05-28 17:03:16.280126 | orchestrator |
2025-05-28 17:03:16.280216 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2025-05-28 17:03:16.280233 | orchestrator | Wednesday 28 May 2025 17:03:16 +0000 (0:00:01.492) 0:00:01.716 *********
2025-05-28 17:03:22.649476 | orchestrator | changed: [testbed-manager]
2025-05-28 17:03:22.651172 | orchestrator |
2025-05-28 17:03:22.651713 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2025-05-28 17:03:22.653070 | orchestrator | Wednesday 28 May 2025 17:03:22 +0000 (0:00:06.369) 0:00:08.085 *********
2025-05-28 17:03:23.201225 | orchestrator | changed: [testbed-manager]
2025-05-28 17:03:23.201382 | orchestrator |
2025-05-28 17:03:23.202221 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2025-05-28 17:03:23.207103 | orchestrator | Wednesday 28 May 2025 17:03:23 +0000 (0:00:00.553) 0:00:08.638 *********
2025-05-28 17:03:23.625617 | orchestrator | changed: [testbed-manager]
2025-05-28 17:03:23.625824 | orchestrator |
2025-05-28 17:03:23.626459 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2025-05-28 17:03:23.627158 | orchestrator | Wednesday 28 May 2025 17:03:23 +0000 (0:00:00.426) 0:00:09.064 *********
2025-05-28 17:03:24.257308 | orchestrator | ok: [testbed-manager]
2025-05-28 17:03:24.258542 | orchestrator |
2025-05-28 17:03:24.258581 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2025-05-28 17:03:24.258644 | orchestrator | Wednesday 28 May 2025 17:03:24 +0000 (0:00:00.629) 0:00:09.694 *********
2025-05-28 17:03:24.662736 | orchestrator | ok: [testbed-manager]
2025-05-28 17:03:24.663276 | orchestrator |
2025-05-28 17:03:24.664078 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2025-05-28 17:03:24.664926 | orchestrator | Wednesday 28 May 2025 17:03:24 +0000 (0:00:00.407) 0:00:10.102 *********
2025-05-28 17:03:25.102212 | orchestrator | ok: [testbed-manager]
2025-05-28 17:03:25.104138 | orchestrator |
2025-05-28 17:03:25.105449 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2025-05-28 17:03:25.106415 | orchestrator | Wednesday 28 May 2025 17:03:25 +0000 (0:00:00.438) 0:00:10.540 *********
2025-05-28 17:03:26.309269 | orchestrator | changed: [testbed-manager]
2025-05-28 17:03:26.310075 | orchestrator |
2025-05-28 17:03:26.310663 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2025-05-28 17:03:26.311711 | orchestrator | Wednesday 28 May 2025 17:03:26 +0000 (0:00:01.205) 0:00:11.746 *********
2025-05-28 17:03:27.214841 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-28 17:03:27.215700 | orchestrator | changed: [testbed-manager]
2025-05-28 17:03:27.216533 | orchestrator |
2025-05-28 17:03:27.216972 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2025-05-28 17:03:27.218464 | orchestrator | Wednesday 28 May 2025 17:03:27 +0000 (0:00:00.900) 0:00:12.647 *********
2025-05-28 17:03:28.921883 | orchestrator | changed: [testbed-manager]
2025-05-28 17:03:28.922236 | orchestrator |
2025-05-28 17:03:28.923290 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-05-28 17:03:28.925026 | orchestrator | Wednesday 28 May 2025 17:03:28 +0000 (0:00:01.710) 0:00:14.358 *********
2025-05-28 17:03:29.835008 | orchestrator | changed: [testbed-manager]
2025-05-28 17:03:29.836281 | orchestrator |
2025-05-28 17:03:29.836795 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 17:03:29.837103 | orchestrator | 2025-05-28 17:03:29 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-28 17:03:29.837471 | orchestrator | 2025-05-28 17:03:29 | INFO  | Please wait and do not abort execution.
2025-05-28 17:03:29.838359 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 17:03:29.838980 | orchestrator |
2025-05-28 17:03:29.839911 | orchestrator |
2025-05-28 17:03:29.840572 | orchestrator | TASKS RECAP ********************************************************************
2025-05-28 17:03:29.841523 | orchestrator | Wednesday 28 May 2025 17:03:29 +0000 (0:00:00.915) 0:00:15.274 *********
2025-05-28 17:03:29.842198 | orchestrator | ===============================================================================
2025-05-28 17:03:29.842866 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.37s
2025-05-28 17:03:29.843674 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.71s
2025-05-28 17:03:29.844213 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.49s
2025-05-28 17:03:29.845142 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.21s
2025-05-28 17:03:29.845560 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.92s
2025-05-28 17:03:29.846278 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.90s
2025-05-28 17:03:29.846795 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.63s
2025-05-28 17:03:29.847574 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.55s
2025-05-28 17:03:29.848137 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.44s
2025-05-28 17:03:29.848693 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.43s
2025-05-28 17:03:29.849138 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.41s
2025-05-28 17:03:30.452449 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-05-28 17:03:30.485192 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2025-05-28 17:03:30.485279 | orchestrator | Dload Upload Total Spent Left Speed
2025-05-28 17:03:30.567173 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 182 0 --:--:-- --:--:-- --:--:-- 185
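[Editor's note] The wireguard role generates server and preshared keys on the manager, renders wg0.conf plus a client configuration, and enables wg-quick@wg0.service; prepare-wireguard-configuration.sh then post-processes the client file (its single curl call appears to fetch a 15-byte response, plausibly a public IP). The rendered wg0.conf is deliberately not logged; as a hedged sketch of the standard wg-quick layout only, with every value a placeholder:

    # Standard wg-quick config shape; all values are placeholders
    sudo tee /etc/wireguard/wg0.conf <<'EOF'
    [Interface]
    Address = 192.168.48.1/24
    ListenPort = 51820
    PrivateKey = <server-private-key>

    [Peer]
    PublicKey = <client-public-key>
    PresharedKey = <preshared-key>
    AllowedIPs = 192.168.48.2/32
    EOF
    sudo systemctl enable --now wg-quick@wg0.service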
2025-05-28 17:03:30.581533 | orchestrator | + osism apply --environment custom workarounds
2025-05-28 17:03:32.244767 | orchestrator | 2025-05-28 17:03:32 | INFO  | Trying to run play workarounds in environment custom
2025-05-28 17:03:32.249507 | orchestrator | Registering Redlock._acquired_script
2025-05-28 17:03:32.249565 | orchestrator | Registering Redlock._extend_script
2025-05-28 17:03:32.249579 | orchestrator | Registering Redlock._release_script
2025-05-28 17:03:32.305797 | orchestrator | 2025-05-28 17:03:32 | INFO  | Task 0a0c2721-2979-45e5-87c3-79c51074bdd4 (workarounds) was prepared for execution.
2025-05-28 17:03:32.305858 | orchestrator | 2025-05-28 17:03:32 | INFO  | It takes a moment until task 0a0c2721-2979-45e5-87c3-79c51074bdd4 (workarounds) has been started and output is visible here.
2025-05-28 17:03:36.236921 | orchestrator |
2025-05-28 17:03:36.237665 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-28 17:03:36.242398 | orchestrator |
2025-05-28 17:03:36.242888 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2025-05-28 17:03:36.243883 | orchestrator | Wednesday 28 May 2025 17:03:36 +0000 (0:00:00.152) 0:00:00.152 *********
2025-05-28 17:03:36.400268 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2025-05-28 17:03:36.481925 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2025-05-28 17:03:36.562529 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2025-05-28 17:03:36.652957 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2025-05-28 17:03:36.828207 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2025-05-28 17:03:36.989803 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2025-05-28 17:03:36.990883 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2025-05-28 17:03:36.993619 | orchestrator |
2025-05-28 17:03:36.993669 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2025-05-28 17:03:36.994403 | orchestrator |
2025-05-28 17:03:36.994846 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-05-28 17:03:36.995392 | orchestrator | Wednesday 28 May 2025 17:03:36 +0000 (0:00:00.755) 0:00:00.908 *********
2025-05-28 17:03:39.678192 | orchestrator | ok: [testbed-manager]
2025-05-28 17:03:39.678861 | orchestrator |
2025-05-28 17:03:39.679218 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2025-05-28 17:03:39.680826 | orchestrator |
2025-05-28 17:03:39.683165 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-05-28 17:03:39.683205 | orchestrator | Wednesday 28 May 2025 17:03:39 +0000 (0:00:02.679) 0:00:03.588 *********
2025-05-28 17:03:41.567867 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:03:41.568686 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:03:41.569531 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:03:41.571151 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:03:41.571994 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:03:41.572832 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:03:41.573644 | orchestrator |
2025-05-28 17:03:41.574825 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2025-05-28 17:03:41.575607 | orchestrator |
2025-05-28 17:03:41.576292 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2025-05-28 17:03:41.577512 | orchestrator | Wednesday 28 May 2025 17:03:41 +0000 (0:00:01.895) 0:00:05.483 *********
2025-05-28 17:03:43.139490 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-28 17:03:43.139681 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-28 17:03:43.140688 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-28 17:03:43.142137 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-28 17:03:43.143051 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-28 17:03:43.144819 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-28 17:03:43.145625 | orchestrator |
2025-05-28 17:03:43.146480 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2025-05-28 17:03:43.147338 | orchestrator | Wednesday 28 May 2025 17:03:43 +0000 (0:00:01.571) 0:00:07.055 *********
2025-05-28 17:03:46.821310 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:03:46.822227 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:03:46.822865 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:03:46.823032 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:03:46.824223 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:03:46.824295 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:03:46.825883 | orchestrator |
2025-05-28 17:03:46.825940 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2025-05-28 17:03:46.825954 | orchestrator | Wednesday 28 May 2025 17:03:46 +0000 (0:00:03.683) 0:00:10.738 *********
2025-05-28 17:03:46.987200 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:03:47.062313 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:03:47.141672 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:03:47.223575 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:03:47.523432 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:03:47.526653 | orchestrator | skipping: [testbed-node-2]
unit file] ************************************** 2025-05-28 17:03:49.198465 | orchestrator | Wednesday 28 May 2025 17:03:49 +0000 (0:00:01.659) 0:00:13.101 ********* 2025-05-28 17:03:50.753678 | orchestrator | changed: [testbed-manager] 2025-05-28 17:03:50.757381 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:03:50.757413 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:03:50.757425 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:03:50.757436 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:03:50.758535 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:03:50.759895 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:03:50.760139 | orchestrator | 2025-05-28 17:03:50.762884 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-05-28 17:03:50.763710 | orchestrator | Wednesday 28 May 2025 17:03:50 +0000 (0:00:01.566) 0:00:14.667 ********* 2025-05-28 17:03:52.248787 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:03:52.250228 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:03:52.252259 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:03:52.253932 | orchestrator | ok: [testbed-manager] 2025-05-28 17:03:52.254556 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:03:52.255489 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:03:52.255864 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:03:52.256603 | orchestrator | 2025-05-28 17:03:52.257124 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-05-28 17:03:52.257766 | orchestrator | Wednesday 28 May 2025 17:03:52 +0000 (0:00:01.493) 0:00:16.161 ********* 2025-05-28 17:03:54.017213 | orchestrator | changed: [testbed-manager] 2025-05-28 17:03:54.018251 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:03:54.019910 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:03:54.020833 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:03:54.022119 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:03:54.023057 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:03:54.025008 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:03:54.025032 | orchestrator | 2025-05-28 17:03:54.026010 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-05-28 17:03:54.026671 | orchestrator | Wednesday 28 May 2025 17:03:54 +0000 (0:00:01.769) 0:00:17.930 ********* 2025-05-28 17:03:54.185358 | orchestrator | skipping: [testbed-manager] 2025-05-28 17:03:54.261598 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:03:54.346393 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:03:54.425201 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:03:54.503261 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:03:54.626605 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:03:54.628797 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:03:54.629516 | orchestrator | 2025-05-28 17:03:54.631993 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-05-28 17:03:54.633132 | orchestrator | 2025-05-28 17:03:54.634376 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-05-28 17:03:54.635883 | orchestrator | Wednesday 28 May 2025 17:03:54 +0000 (0:00:00.612) 0:00:18.543 ********* 2025-05-28 17:03:57.413582 | orchestrator | ok: [testbed-manager] 2025-05-28 17:03:57.415325 | orchestrator | ok: [testbed-node-3] 
2025-05-28 17:03:57.417434 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:03:57.418078 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:03:57.418742 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:03:57.420618 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:03:57.421069 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:03:57.421599 | orchestrator | 2025-05-28 17:03:57.422531 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:03:57.422633 | orchestrator | 2025-05-28 17:03:57 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-28 17:03:57.422761 | orchestrator | 2025-05-28 17:03:57 | INFO  | Please wait and do not abort execution. 2025-05-28 17:03:57.423723 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-28 17:03:57.424097 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:03:57.424395 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:03:57.425048 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:03:57.425406 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:03:57.425895 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:03:57.426599 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:03:57.426801 | orchestrator | 2025-05-28 17:03:57.427183 | orchestrator | 2025-05-28 17:03:57.427672 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:03:57.428054 | orchestrator | Wednesday 28 May 2025 17:03:57 +0000 (0:00:02.787) 0:00:21.330 ********* 2025-05-28 17:03:57.428430 | orchestrator | =============================================================================== 2025-05-28 17:03:57.428855 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.68s 2025-05-28 17:03:57.429153 | orchestrator | Install python3-docker -------------------------------------------------- 2.79s 2025-05-28 17:03:57.429484 | orchestrator | Apply netplan configuration --------------------------------------------- 2.68s 2025-05-28 17:03:57.430172 | orchestrator | Apply netplan configuration --------------------------------------------- 1.90s 2025-05-28 17:03:57.430765 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.77s 2025-05-28 17:03:57.431210 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.66s 2025-05-28 17:03:57.431679 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.57s 2025-05-28 17:03:57.433575 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.57s 2025-05-28 17:03:57.437090 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.49s 2025-05-28 17:03:57.437573 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.76s 2025-05-28 17:03:57.438399 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.70s 2025-05-28 17:03:57.438735 | orchestrator | 
Enable and start workarounds.service (RedHat) --------------------------- 0.61s 2025-05-28 17:03:58.030462 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-05-28 17:03:59.679266 | orchestrator | Registering Redlock._acquired_script 2025-05-28 17:03:59.679449 | orchestrator | Registering Redlock._extend_script 2025-05-28 17:03:59.679474 | orchestrator | Registering Redlock._release_script 2025-05-28 17:03:59.736907 | orchestrator | 2025-05-28 17:03:59 | INFO  | Task d08993c7-c971-4e76-82d6-e2a99b9de77c (reboot) was prepared for execution. 2025-05-28 17:03:59.737013 | orchestrator | 2025-05-28 17:03:59 | INFO  | It takes a moment until task d08993c7-c971-4e76-82d6-e2a99b9de77c (reboot) has been started and output is visible here. 2025-05-28 17:04:03.696815 | orchestrator | 2025-05-28 17:04:03.697469 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-28 17:04:03.698218 | orchestrator | 2025-05-28 17:04:03.699493 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-28 17:04:03.699899 | orchestrator | Wednesday 28 May 2025 17:04:03 +0000 (0:00:00.158) 0:00:00.158 ********* 2025-05-28 17:04:03.770784 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:04:03.771816 | orchestrator | 2025-05-28 17:04:03.771849 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-28 17:04:03.772551 | orchestrator | Wednesday 28 May 2025 17:04:03 +0000 (0:00:00.076) 0:00:00.235 ********* 2025-05-28 17:04:04.646475 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:04:04.646604 | orchestrator | 2025-05-28 17:04:04.646692 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-28 17:04:04.647514 | orchestrator | Wednesday 28 May 2025 17:04:04 +0000 (0:00:00.873) 0:00:01.109 ********* 2025-05-28 17:04:04.749801 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:04:04.750276 | orchestrator | 2025-05-28 17:04:04.750476 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-28 17:04:04.750975 | orchestrator | 2025-05-28 17:04:04.751428 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-28 17:04:04.751772 | orchestrator | Wednesday 28 May 2025 17:04:04 +0000 (0:00:00.101) 0:00:01.211 ********* 2025-05-28 17:04:04.830455 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:04:04.830569 | orchestrator | 2025-05-28 17:04:04.830582 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-28 17:04:04.830651 | orchestrator | Wednesday 28 May 2025 17:04:04 +0000 (0:00:00.083) 0:00:01.294 ********* 2025-05-28 17:04:05.473660 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:04:05.474145 | orchestrator | 2025-05-28 17:04:05.474738 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-28 17:04:05.475916 | orchestrator | Wednesday 28 May 2025 17:04:05 +0000 (0:00:00.643) 0:00:01.937 ********* 2025-05-28 17:04:05.558286 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:04:05.558929 | orchestrator | 2025-05-28 17:04:05.559492 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-28 17:04:05.560212 | orchestrator | 2025-05-28 17:04:05.561522 | orchestrator | TASK [Exit playbook, if user did not mean to 
reboot systems] ******************* 2025-05-28 17:04:05.561554 | orchestrator | Wednesday 28 May 2025 17:04:05 +0000 (0:00:00.086) 0:00:02.023 ********* 2025-05-28 17:04:05.718261 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:04:05.721010 | orchestrator | 2025-05-28 17:04:05.721073 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-28 17:04:05.721097 | orchestrator | Wednesday 28 May 2025 17:04:05 +0000 (0:00:00.158) 0:00:02.182 ********* 2025-05-28 17:04:06.407455 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:04:06.409305 | orchestrator | 2025-05-28 17:04:06.409377 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-28 17:04:06.410262 | orchestrator | Wednesday 28 May 2025 17:04:06 +0000 (0:00:00.689) 0:00:02.871 ********* 2025-05-28 17:04:06.506143 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:04:06.508211 | orchestrator | 2025-05-28 17:04:06.508839 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-28 17:04:06.510084 | orchestrator | 2025-05-28 17:04:06.513085 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-28 17:04:06.513228 | orchestrator | Wednesday 28 May 2025 17:04:06 +0000 (0:00:00.099) 0:00:02.971 ********* 2025-05-28 17:04:06.583721 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:04:06.583861 | orchestrator | 2025-05-28 17:04:06.583931 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-28 17:04:06.583948 | orchestrator | Wednesday 28 May 2025 17:04:06 +0000 (0:00:00.077) 0:00:03.048 ********* 2025-05-28 17:04:07.256200 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:04:07.256621 | orchestrator | 2025-05-28 17:04:07.258140 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-28 17:04:07.258473 | orchestrator | Wednesday 28 May 2025 17:04:07 +0000 (0:00:00.670) 0:00:03.719 ********* 2025-05-28 17:04:07.364510 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:04:07.365501 | orchestrator | 2025-05-28 17:04:07.368641 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-28 17:04:07.368909 | orchestrator | 2025-05-28 17:04:07.370082 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-28 17:04:07.370494 | orchestrator | Wednesday 28 May 2025 17:04:07 +0000 (0:00:00.107) 0:00:03.826 ********* 2025-05-28 17:04:07.461591 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:04:07.462130 | orchestrator | 2025-05-28 17:04:07.462200 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-28 17:04:07.463097 | orchestrator | Wednesday 28 May 2025 17:04:07 +0000 (0:00:00.099) 0:00:03.926 ********* 2025-05-28 17:04:08.165198 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:04:08.166105 | orchestrator | 2025-05-28 17:04:08.166605 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-28 17:04:08.167796 | orchestrator | Wednesday 28 May 2025 17:04:08 +0000 (0:00:00.701) 0:00:04.627 ********* 2025-05-28 17:04:08.300142 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:04:08.300955 | orchestrator | 2025-05-28 17:04:08.302177 | orchestrator | PLAY [Reboot systems] 
********************************************************** 2025-05-28 17:04:08.304095 | orchestrator | 2025-05-28 17:04:08.304754 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-28 17:04:08.305553 | orchestrator | Wednesday 28 May 2025 17:04:08 +0000 (0:00:00.133) 0:00:04.761 ********* 2025-05-28 17:04:08.406142 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:04:08.406265 | orchestrator | 2025-05-28 17:04:08.406584 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-28 17:04:08.407166 | orchestrator | Wednesday 28 May 2025 17:04:08 +0000 (0:00:00.108) 0:00:04.870 ********* 2025-05-28 17:04:09.088790 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:04:09.089815 | orchestrator | 2025-05-28 17:04:09.090505 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-28 17:04:09.091391 | orchestrator | Wednesday 28 May 2025 17:04:09 +0000 (0:00:00.681) 0:00:05.551 ********* 2025-05-28 17:04:09.127735 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:04:09.128184 | orchestrator | 2025-05-28 17:04:09.128880 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:04:09.129311 | orchestrator | 2025-05-28 17:04:09 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-28 17:04:09.129590 | orchestrator | 2025-05-28 17:04:09 | INFO  | Please wait and do not abort execution. 2025-05-28 17:04:09.130714 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:04:09.131856 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:04:09.132275 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:04:09.132855 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:04:09.133505 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:04:09.134258 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:04:09.134613 | orchestrator | 2025-05-28 17:04:09.134972 | orchestrator | 2025-05-28 17:04:09.135422 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:04:09.136123 | orchestrator | Wednesday 28 May 2025 17:04:09 +0000 (0:00:00.040) 0:00:05.591 ********* 2025-05-28 17:04:09.136598 | orchestrator | =============================================================================== 2025-05-28 17:04:09.136931 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.26s 2025-05-28 17:04:09.137190 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.61s 2025-05-28 17:04:09.137647 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.57s 2025-05-28 17:04:09.661600 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-05-28 17:04:11.320555 | orchestrator | Registering Redlock._acquired_script 2025-05-28 17:04:11.320664 | orchestrator | Registering Redlock._extend_script 2025-05-28 17:04:11.320677 | orchestrator | Registering Redlock._release_script 2025-05-28 
17:04:11.378090 | orchestrator | 2025-05-28 17:04:11 | INFO  | Task 4e9ed90a-3f62-447c-98b5-993f04e2a23d (wait-for-connection) was prepared for execution. 2025-05-28 17:04:11.378146 | orchestrator | 2025-05-28 17:04:11 | INFO  | It takes a moment until task 4e9ed90a-3f62-447c-98b5-993f04e2a23d (wait-for-connection) has been started and output is visible here. 2025-05-28 17:04:15.513698 | orchestrator | 2025-05-28 17:04:15.516047 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-05-28 17:04:15.516097 | orchestrator | 2025-05-28 17:04:15.518261 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-05-28 17:04:15.519175 | orchestrator | Wednesday 28 May 2025 17:04:15 +0000 (0:00:00.239) 0:00:00.239 ********* 2025-05-28 17:04:27.897035 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:04:27.897189 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:04:27.897425 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:04:27.898196 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:04:27.899265 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:04:27.900192 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:04:27.900621 | orchestrator | 2025-05-28 17:04:27.900939 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:04:27.901387 | orchestrator | 2025-05-28 17:04:27 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-28 17:04:27.901520 | orchestrator | 2025-05-28 17:04:27 | INFO  | Please wait and do not abort execution. 2025-05-28 17:04:27.902163 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:04:27.902643 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:04:27.903010 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:04:27.903544 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:04:27.904226 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:04:27.904514 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:04:27.904861 | orchestrator | 2025-05-28 17:04:27.905263 | orchestrator | 2025-05-28 17:04:27.905969 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:04:27.906124 | orchestrator | Wednesday 28 May 2025 17:04:27 +0000 (0:00:12.382) 0:00:12.622 ********* 2025-05-28 17:04:27.906472 | orchestrator | =============================================================================== 2025-05-28 17:04:27.907841 | orchestrator | Wait until remote system is reachable ---------------------------------- 12.38s 2025-05-28 17:04:28.471165 | orchestrator | + osism apply hddtemp 2025-05-28 17:04:30.085946 | orchestrator | Registering Redlock._acquired_script 2025-05-28 17:04:30.086091 | orchestrator | Registering Redlock._extend_script 2025-05-28 17:04:30.086102 | orchestrator | Registering Redlock._release_script 2025-05-28 17:04:30.146535 | orchestrator | 2025-05-28 17:04:30 | INFO  | Task 0109410a-1405-4717-a15a-efed35a29b8f (hddtemp) was prepared for execution. 
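The reboot sequence above is split into two plays: the first triggers the reboot on each node without waiting ("do not wait for the reboot to complete"), and the wait-for-connection play then polls until every node is reachable again. A hypothetical shell equivalent of that split, using the node names from this log (the timeout and sleep values are assumptions):

# Phase 1: trigger reboots without blocking on each node coming back.
for node in testbed-node-{0..5}; do
  ssh "$node" 'sudo systemctl reboot' || true   # the SSH session drops as the node goes down
done

# Phase 2: poll until sshd answers again on every node.
for node in testbed-node-{0..5}; do
  until ssh -o ConnectTimeout=5 "$node" true 2>/dev/null; do
    sleep 5
  done
done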
2025-05-28 17:04:30.146646 | orchestrator | 2025-05-28 17:04:30 | INFO  | It takes a moment until task 0109410a-1405-4717-a15a-efed35a29b8f (hddtemp) has been started and output is visible here. 2025-05-28 17:04:34.195808 | orchestrator | 2025-05-28 17:04:34.196224 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-05-28 17:04:34.197739 | orchestrator | 2025-05-28 17:04:34.197882 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-05-28 17:04:34.199619 | orchestrator | Wednesday 28 May 2025 17:04:34 +0000 (0:00:00.286) 0:00:00.286 ********* 2025-05-28 17:04:34.347316 | orchestrator | ok: [testbed-manager] 2025-05-28 17:04:34.423692 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:04:34.499888 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:04:34.575232 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:04:34.762426 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:04:34.888552 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:04:34.889166 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:04:34.891070 | orchestrator | 2025-05-28 17:04:34.892264 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-05-28 17:04:34.893310 | orchestrator | Wednesday 28 May 2025 17:04:34 +0000 (0:00:00.692) 0:00:00.979 ********* 2025-05-28 17:04:36.069826 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:04:36.073204 | orchestrator | 2025-05-28 17:04:36.073268 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-05-28 17:04:36.073397 | orchestrator | Wednesday 28 May 2025 17:04:36 +0000 (0:00:01.180) 0:00:02.159 ********* 2025-05-28 17:04:38.097554 | orchestrator | ok: [testbed-manager] 2025-05-28 17:04:38.098952 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:04:38.099029 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:04:38.099708 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:04:38.100253 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:04:38.101405 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:04:38.102630 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:04:38.105229 | orchestrator | 2025-05-28 17:04:38.105261 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-05-28 17:04:38.105272 | orchestrator | Wednesday 28 May 2025 17:04:38 +0000 (0:00:02.028) 0:00:04.187 ********* 2025-05-28 17:04:38.648934 | orchestrator | changed: [testbed-manager] 2025-05-28 17:04:38.741140 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:04:38.827621 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:04:39.293670 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:04:39.293801 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:04:39.293968 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:04:39.294431 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:04:39.294629 | orchestrator | 2025-05-28 17:04:39.296540 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-05-28 17:04:39.297977 | orchestrator | Wednesday 28 May 2025 17:04:39 +0000 (0:00:01.194) 0:00:05.382 ********* 2025-05-28 17:04:41.230441 | orchestrator | ok: [testbed-node-0] 2025-05-28 
17:04:41.230671 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:04:41.234246 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:04:41.235898 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:04:41.236743 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:04:41.238112 | orchestrator | ok: [testbed-manager] 2025-05-28 17:04:41.239778 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:04:41.239798 | orchestrator | 2025-05-28 17:04:41.240330 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-05-28 17:04:41.240692 | orchestrator | Wednesday 28 May 2025 17:04:41 +0000 (0:00:01.936) 0:00:07.319 ********* 2025-05-28 17:04:41.667166 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:04:41.744936 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:04:41.828180 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:04:41.912867 | orchestrator | changed: [testbed-manager] 2025-05-28 17:04:42.039630 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:04:42.041490 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:04:42.042438 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:04:42.043612 | orchestrator | 2025-05-28 17:04:42.045145 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-05-28 17:04:42.045969 | orchestrator | Wednesday 28 May 2025 17:04:42 +0000 (0:00:00.814) 0:00:08.133 ********* 2025-05-28 17:04:55.615687 | orchestrator | changed: [testbed-manager] 2025-05-28 17:04:55.615869 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:04:55.616284 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:04:55.618230 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:04:55.619239 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:04:55.620961 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:04:55.621084 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:04:55.621842 | orchestrator | 2025-05-28 17:04:55.622630 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-05-28 17:04:55.623360 | orchestrator | Wednesday 28 May 2025 17:04:55 +0000 (0:00:13.568) 0:00:21.701 ********* 2025-05-28 17:04:56.869932 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:04:56.874215 | orchestrator | 2025-05-28 17:04:56.874281 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-05-28 17:04:56.874296 | orchestrator | Wednesday 28 May 2025 17:04:56 +0000 (0:00:01.259) 0:00:22.960 ********* 2025-05-28 17:04:58.770878 | orchestrator | changed: [testbed-manager] 2025-05-28 17:04:58.772019 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:04:58.772566 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:04:58.772902 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:04:58.773997 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:04:58.775625 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:04:58.777273 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:04:58.777318 | orchestrator | 2025-05-28 17:04:58.777622 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:04:58.778063 | orchestrator | 2025-05-28 17:04:58 | INFO  | Play has been completed. 
There may now be a delay until all logs have been written. 2025-05-28 17:04:58.778332 | orchestrator | 2025-05-28 17:04:58 | INFO  | Please wait and do not abort execution. 2025-05-28 17:04:58.779036 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:04:58.779537 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-28 17:04:58.780247 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-28 17:04:58.781022 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-28 17:04:58.781305 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-28 17:04:58.782141 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-28 17:04:58.782349 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-28 17:04:58.788479 | orchestrator | 2025-05-28 17:04:58.788640 | orchestrator | 2025-05-28 17:04:58.791458 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:04:58.792184 | orchestrator | Wednesday 28 May 2025 17:04:58 +0000 (0:00:01.904) 0:00:24.865 ********* 2025-05-28 17:04:58.793639 | orchestrator | =============================================================================== 2025-05-28 17:04:58.794355 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.57s 2025-05-28 17:04:58.794684 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.03s 2025-05-28 17:04:58.795113 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.94s 2025-05-28 17:04:58.795543 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.90s 2025-05-28 17:04:58.795874 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.26s 2025-05-28 17:04:58.796202 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.19s 2025-05-28 17:04:58.796461 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.18s 2025-05-28 17:04:58.796870 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.81s 2025-05-28 17:04:58.797240 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.69s 2025-05-28 17:04:59.374342 | orchestrator | + sudo systemctl restart docker-compose@manager 2025-05-28 17:05:15.858523 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-05-28 17:05:15.858637 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-05-28 17:05:15.858652 | orchestrator | + local max_attempts=60 2025-05-28 17:05:15.858663 | orchestrator | + local name=ceph-ansible 2025-05-28 17:05:15.858673 | orchestrator | + local attempt_num=1 2025-05-28 17:05:15.858683 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-05-28 17:05:15.899091 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-28 17:05:15.899188 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-05-28 17:05:15.899203 | orchestrator | + local max_attempts=60 2025-05-28 17:05:15.899215 | orchestrator | + local 
name=kolla-ansible 2025-05-28 17:05:15.899226 | orchestrator | + local attempt_num=1 2025-05-28 17:05:15.899617 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-05-28 17:05:15.935529 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-28 17:05:15.935623 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-05-28 17:05:15.935636 | orchestrator | + local max_attempts=60 2025-05-28 17:05:15.935649 | orchestrator | + local name=osism-ansible 2025-05-28 17:05:15.935660 | orchestrator | + local attempt_num=1 2025-05-28 17:05:15.936023 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-05-28 17:05:15.974420 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-28 17:05:15.974516 | orchestrator | + [[ true == \t\r\u\e ]] 2025-05-28 17:05:15.974530 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-05-28 17:05:16.167698 | orchestrator | ARA in ceph-ansible already disabled. 2025-05-28 17:05:16.344688 | orchestrator | ARA in kolla-ansible already disabled. 2025-05-28 17:05:16.530514 | orchestrator | ARA in osism-ansible already disabled. 2025-05-28 17:05:16.710265 | orchestrator | ARA in osism-kubernetes already disabled. 2025-05-28 17:05:16.710972 | orchestrator | + osism apply gather-facts 2025-05-28 17:05:18.400007 | orchestrator | Registering Redlock._acquired_script 2025-05-28 17:05:18.400123 | orchestrator | Registering Redlock._extend_script 2025-05-28 17:05:18.400138 | orchestrator | Registering Redlock._release_script 2025-05-28 17:05:18.473855 | orchestrator | 2025-05-28 17:05:18 | INFO  | Task 6480de78-c56b-462c-9b9e-806823435690 (gather-facts) was prepared for execution. 2025-05-28 17:05:18.473946 | orchestrator | 2025-05-28 17:05:18 | INFO  | It takes a moment until task 6480de78-c56b-462c-9b9e-806823435690 (gather-facts) has been started and output is visible here. 
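Before gathering facts, the job restarts the manager's docker-compose service and waits for the ceph-ansible, kolla-ansible, and osism-ansible containers to report healthy. Only fragments of wait_for_container_healthy are visible in the xtrace above; the following is a reconstruction from the traced variable names and the docker inspect call (the retry delay and failure handling are assumptions):

# Reconstructed from the trace above; sleep interval and error path are guesses.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num == max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 5
    done
}

# As in the log: up to 60 attempts per container.
wait_for_container_healthy 60 ceph-ansible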
2025-05-28 17:05:22.679647 | orchestrator | 2025-05-28 17:05:22.680779 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-28 17:05:22.681692 | orchestrator | 2025-05-28 17:05:22.684456 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-28 17:05:22.684788 | orchestrator | Wednesday 28 May 2025 17:05:22 +0000 (0:00:00.228) 0:00:00.228 ********* 2025-05-28 17:05:28.189795 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:05:28.189984 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:05:28.190730 | orchestrator | ok: [testbed-manager] 2025-05-28 17:05:28.191976 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:05:28.192524 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:05:28.193492 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:05:28.194301 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:05:28.196271 | orchestrator | 2025-05-28 17:05:28.196294 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-28 17:05:28.196308 | orchestrator | 2025-05-28 17:05:28.196785 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-28 17:05:28.196953 | orchestrator | Wednesday 28 May 2025 17:05:28 +0000 (0:00:05.510) 0:00:05.739 ********* 2025-05-28 17:05:28.347892 | orchestrator | skipping: [testbed-manager] 2025-05-28 17:05:28.430663 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:05:28.519030 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:05:28.597901 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:05:28.680761 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:05:28.719483 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:05:28.720755 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:05:28.721993 | orchestrator | 2025-05-28 17:05:28.724051 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:05:28.724326 | orchestrator | 2025-05-28 17:05:28 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-28 17:05:28.724353 | orchestrator | 2025-05-28 17:05:28 | INFO  | Please wait and do not abort execution. 
2025-05-28 17:05:28.725762 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-28 17:05:28.726168 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-28 17:05:28.727607 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-28 17:05:28.729654 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-28 17:05:28.731202 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-28 17:05:28.732247 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-28 17:05:28.733302 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-28 17:05:28.734113 | orchestrator | 2025-05-28 17:05:28.734606 | orchestrator | 2025-05-28 17:05:28.735107 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:05:28.736092 | orchestrator | Wednesday 28 May 2025 17:05:28 +0000 (0:00:00.531) 0:00:06.270 ********* 2025-05-28 17:05:28.737099 | orchestrator | =============================================================================== 2025-05-28 17:05:28.737628 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.51s 2025-05-28 17:05:28.738523 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s 2025-05-28 17:05:29.376525 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-05-28 17:05:29.391516 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-05-28 17:05:29.402121 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-05-28 17:05:29.418699 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-05-28 17:05:29.432847 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-05-28 17:05:29.452079 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-05-28 17:05:29.464887 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-05-28 17:05:29.477497 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-05-28 17:05:29.489334 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-05-28 17:05:29.505687 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-05-28 17:05:29.520556 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-05-28 17:05:29.535525 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-05-28 17:05:29.552755 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh 
/usr/local/bin/upgrade-infrastructure 2025-05-28 17:05:29.563566 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-05-28 17:05:29.573893 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-05-28 17:05:29.586755 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-05-28 17:05:29.599945 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-05-28 17:05:29.612786 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-05-28 17:05:29.624566 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-05-28 17:05:29.637815 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-05-28 17:05:29.656709 | orchestrator | + [[ false == \t\r\u\e ]] 2025-05-28 17:05:29.785630 | orchestrator | ok: Runtime: 0:25:37.835318 2025-05-28 17:05:29.892692 | 2025-05-28 17:05:29.892869 | TASK [Deploy services] 2025-05-28 17:05:30.427526 | orchestrator | skipping: Conditional result was False 2025-05-28 17:05:30.446166 | 2025-05-28 17:05:30.446328 | TASK [Deploy in a nutshell] 2025-05-28 17:05:31.165748 | orchestrator | + set -e 2025-05-28 17:05:31.165915 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-28 17:05:31.165928 | orchestrator | ++ export INTERACTIVE=false 2025-05-28 17:05:31.165937 | orchestrator | ++ INTERACTIVE=false 2025-05-28 17:05:31.165943 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-28 17:05:31.165948 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-28 17:05:31.165954 | orchestrator | + source /opt/manager-vars.sh 2025-05-28 17:05:31.165979 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-28 17:05:31.166000 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-28 17:05:31.166005 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-28 17:05:31.166035 | orchestrator | ++ CEPH_VERSION=reef 2025-05-28 17:05:31.166044 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-28 17:05:31.166054 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-28 17:05:31.166061 | orchestrator | ++ export MANAGER_VERSION=latest 2025-05-28 17:05:31.166073 | orchestrator | ++ MANAGER_VERSION=latest 2025-05-28 17:05:31.166079 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-28 17:05:31.166085 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-28 17:05:31.166089 | orchestrator | ++ export ARA=false 2025-05-28 17:05:31.166093 | orchestrator | ++ ARA=false 2025-05-28 17:05:31.166097 | orchestrator | ++ export TEMPEST=false 2025-05-28 17:05:31.166101 | orchestrator | ++ TEMPEST=false 2025-05-28 17:05:31.166105 | orchestrator | ++ export IS_ZUUL=true 2025-05-28 17:05:31.166109 | orchestrator | ++ IS_ZUUL=true 2025-05-28 17:05:31.166113 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180 2025-05-28 17:05:31.166116 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180 2025-05-28 17:05:31.166696 | orchestrator | 2025-05-28 17:05:31.166710 | orchestrator | # PULL IMAGES 2025-05-28 17:05:31.166715 | orchestrator | 2025-05-28 17:05:31.166720 | orchestrator | ++ export EXTERNAL_API=false 2025-05-28 17:05:31.166724 | orchestrator | ++ EXTERNAL_API=false 
2025-05-28 17:05:31.166729 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-28 17:05:31.166733 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-28 17:05:31.166738 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-28 17:05:31.166743 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-28 17:05:31.166747 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-28 17:05:31.166751 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-28 17:05:31.166755 | orchestrator | + echo 2025-05-28 17:05:31.166760 | orchestrator | + echo '# PULL IMAGES' 2025-05-28 17:05:31.166765 | orchestrator | + echo 2025-05-28 17:05:31.167637 | orchestrator | ++ semver latest 7.0.0 2025-05-28 17:05:31.230585 | orchestrator | + [[ -1 -ge 0 ]] 2025-05-28 17:05:31.230671 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-05-28 17:05:31.230684 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-05-28 17:05:32.946077 | orchestrator | 2025-05-28 17:05:32 | INFO  | Trying to run play pull-images in environment custom 2025-05-28 17:05:32.950717 | orchestrator | Registering Redlock._acquired_script 2025-05-28 17:05:32.950746 | orchestrator | Registering Redlock._extend_script 2025-05-28 17:05:32.950759 | orchestrator | Registering Redlock._release_script 2025-05-28 17:05:33.011462 | orchestrator | 2025-05-28 17:05:33 | INFO  | Task 6b6859f9-cf49-46dd-a631-48f6de5c97df (pull-images) was prepared for execution. 2025-05-28 17:05:33.011549 | orchestrator | 2025-05-28 17:05:33 | INFO  | It takes a moment until task 6b6859f9-cf49-46dd-a631-48f6de5c97df (pull-images) has been started and output is visible here. 2025-05-28 17:05:37.016744 | orchestrator | 2025-05-28 17:05:37.016895 | orchestrator | PLAY [Pull images] ************************************************************* 2025-05-28 17:05:37.017403 | orchestrator | 2025-05-28 17:05:37.018258 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-05-28 17:05:37.018987 | orchestrator | Wednesday 28 May 2025 17:05:37 +0000 (0:00:00.153) 0:00:00.153 ********* 2025-05-28 17:06:43.525971 | orchestrator | changed: [testbed-manager] 2025-05-28 17:06:43.526167 | orchestrator | 2025-05-28 17:06:43.526189 | orchestrator | TASK [Pull other images] ******************************************************* 2025-05-28 17:06:43.526233 | orchestrator | Wednesday 28 May 2025 17:06:43 +0000 (0:01:06.513) 0:01:06.666 ********* 2025-05-28 17:07:36.304528 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-05-28 17:07:36.304748 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-05-28 17:07:36.304770 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-05-28 17:07:36.304785 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-05-28 17:07:36.304796 | orchestrator | changed: [testbed-manager] => (item=common) 2025-05-28 17:07:36.304861 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-05-28 17:07:36.305889 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-05-28 17:07:36.307005 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-05-28 17:07:36.308028 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-05-28 17:07:36.308941 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-05-28 17:07:36.309625 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-05-28 17:07:36.310340 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-05-28 17:07:36.311160 | orchestrator | changed: 
[testbed-manager] => (item=mariadb) 2025-05-28 17:07:36.311672 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-05-28 17:07:36.311983 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-05-28 17:07:36.312515 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-05-28 17:07:36.312926 | orchestrator | changed: [testbed-manager] => (item=octavia) 2025-05-28 17:07:36.313202 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-05-28 17:07:36.313727 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-05-28 17:07:36.314122 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-05-28 17:07:36.314502 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-05-28 17:07:36.314845 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-05-28 17:07:36.315229 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-05-28 17:07:36.315719 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-05-28 17:07:36.316050 | orchestrator | 2025-05-28 17:07:36.316399 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:07:36.316885 | orchestrator | 2025-05-28 17:07:36 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-28 17:07:36.316909 | orchestrator | 2025-05-28 17:07:36 | INFO  | Please wait and do not abort execution. 2025-05-28 17:07:36.317517 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:07:36.317795 | orchestrator | 2025-05-28 17:07:36.318144 | orchestrator | 2025-05-28 17:07:36.318443 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:07:36.318813 | orchestrator | Wednesday 28 May 2025 17:07:36 +0000 (0:00:52.776) 0:01:59.443 ********* 2025-05-28 17:07:36.319084 | orchestrator | =============================================================================== 2025-05-28 17:07:36.319497 | orchestrator | Pull keystone image ---------------------------------------------------- 66.51s 2025-05-28 17:07:36.319875 | orchestrator | Pull other images ------------------------------------------------------ 52.78s 2025-05-28 17:07:38.631394 | orchestrator | 2025-05-28 17:07:38 | INFO  | Trying to run play wipe-partitions in environment custom 2025-05-28 17:07:38.636404 | orchestrator | Registering Redlock._acquired_script 2025-05-28 17:07:38.636521 | orchestrator | Registering Redlock._extend_script 2025-05-28 17:07:38.636540 | orchestrator | Registering Redlock._release_script 2025-05-28 17:07:38.695330 | orchestrator | 2025-05-28 17:07:38 | INFO  | Task 591ed596-407b-4b85-b03d-598c04c5726f (wipe-partitions) was prepared for execution. 2025-05-28 17:07:38.695407 | orchestrator | 2025-05-28 17:07:38 | INFO  | It takes a moment until task 591ed596-407b-4b85-b03d-598c04c5726f (wipe-partitions) has been started and output is visible here. 
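The wipe-partitions play whose output follows clears the Ceph data disks (/dev/sdb, /dev/sdc, /dev/sdd on testbed-node-3 through testbed-node-5) before deployment. A rough manual equivalent of its tasks, matched to the task names shown below (the device list is from the log; the exact command arguments are assumptions):

# Run on each storage node; devices as listed in the play output below.
for dev in /dev/sdb /dev/sdc /dev/sdd; do
  sudo wipefs --all "$dev"                        # "Wipe partitions with wipefs"
  sudo dd if=/dev/zero of="$dev" bs=1M count=32   # "Overwrite first 32M with zeros"
done
sudo udevadm control --reload-rules               # "Reload udev rules"
sudo udevadm trigger                              # "Request device events from the kernel"

Wiping the filesystem signatures and then zeroing the start of each disk leaves the devices looking empty to the subsequent Ceph deployment.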
2025-05-28 17:07:42.238333 | orchestrator | 2025-05-28 17:07:42.242442 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-05-28 17:07:42.242594 | orchestrator | 2025-05-28 17:07:42.243070 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-05-28 17:07:42.243388 | orchestrator | Wednesday 28 May 2025 17:07:42 +0000 (0:00:00.100) 0:00:00.100 ********* 2025-05-28 17:07:42.767373 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:07:42.767639 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:07:42.767659 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:07:42.767796 | orchestrator | 2025-05-28 17:07:42.767815 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-05-28 17:07:42.770601 | orchestrator | Wednesday 28 May 2025 17:07:42 +0000 (0:00:00.523) 0:00:00.624 ********* 2025-05-28 17:07:42.909719 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:07:42.994289 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:07:42.998109 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:07:42.998523 | orchestrator | 2025-05-28 17:07:42.998985 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-05-28 17:07:42.999660 | orchestrator | Wednesday 28 May 2025 17:07:42 +0000 (0:00:00.233) 0:00:00.858 ********* 2025-05-28 17:07:43.706584 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:07:43.706754 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:07:43.706771 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:07:43.706845 | orchestrator | 2025-05-28 17:07:43.707192 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-05-28 17:07:43.707658 | orchestrator | Wednesday 28 May 2025 17:07:43 +0000 (0:00:00.710) 0:00:01.568 ********* 2025-05-28 17:07:43.876004 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:07:43.975615 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:07:43.975816 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:07:43.975841 | orchestrator | 2025-05-28 17:07:43.975958 | orchestrator | TASK [Check device availability] *********************************************** 2025-05-28 17:07:43.976180 | orchestrator | Wednesday 28 May 2025 17:07:43 +0000 (0:00:00.268) 0:00:01.837 ********* 2025-05-28 17:07:45.239526 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-05-28 17:07:45.239705 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-05-28 17:07:45.239945 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-05-28 17:07:45.240305 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-05-28 17:07:45.240639 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-05-28 17:07:45.241106 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-05-28 17:07:45.241559 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-05-28 17:07:45.241860 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-05-28 17:07:45.242270 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-05-28 17:07:45.242610 | orchestrator | 2025-05-28 17:07:45.242946 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-05-28 17:07:45.243084 | orchestrator | Wednesday 28 May 2025 17:07:45 +0000 (0:00:01.265) 0:00:03.102 ********* 2025-05-28 17:07:46.597741 | 
orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-05-28 17:07:46.597863 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-05-28 17:07:46.597880 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-05-28 17:07:46.597948 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-05-28 17:07:46.598172 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-05-28 17:07:46.598608 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-05-28 17:07:46.598983 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-05-28 17:07:46.599091 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-05-28 17:07:46.600646 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-05-28 17:07:46.600810 | orchestrator | 2025-05-28 17:07:46.601074 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-05-28 17:07:46.603625 | orchestrator | Wednesday 28 May 2025 17:07:46 +0000 (0:00:01.355) 0:00:04.458 ********* 2025-05-28 17:07:48.829434 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-05-28 17:07:48.831264 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-05-28 17:07:48.833928 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-05-28 17:07:48.833974 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-05-28 17:07:48.833987 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-05-28 17:07:48.834085 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-05-28 17:07:48.835240 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-05-28 17:07:48.837352 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-05-28 17:07:48.839046 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-05-28 17:07:48.839783 | orchestrator | 2025-05-28 17:07:48.840761 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-05-28 17:07:48.841831 | orchestrator | Wednesday 28 May 2025 17:07:48 +0000 (0:00:02.229) 0:00:06.688 ********* 2025-05-28 17:07:49.449230 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:07:49.452809 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:07:49.454450 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:07:49.456490 | orchestrator | 2025-05-28 17:07:49.457371 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-05-28 17:07:49.458835 | orchestrator | Wednesday 28 May 2025 17:07:49 +0000 (0:00:00.622) 0:00:07.310 ********* 2025-05-28 17:07:50.106390 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:07:50.107707 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:07:50.110214 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:07:50.111781 | orchestrator | 2025-05-28 17:07:50.113610 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:07:50.114498 | orchestrator | 2025-05-28 17:07:50 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-28 17:07:50.115621 | orchestrator | 2025-05-28 17:07:50 | INFO  | Please wait and do not abort execution. 
2025-05-28 17:07:50.118274 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:07:50.119992 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:07:50.121787 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:07:50.122133 | orchestrator | 2025-05-28 17:07:50.124296 | orchestrator | 2025-05-28 17:07:50.126953 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:07:50.127898 | orchestrator | Wednesday 28 May 2025 17:07:50 +0000 (0:00:00.658) 0:00:07.969 ********* 2025-05-28 17:07:50.129352 | orchestrator | =============================================================================== 2025-05-28 17:07:50.131274 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.23s 2025-05-28 17:07:50.132377 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.36s 2025-05-28 17:07:50.133725 | orchestrator | Check device availability ----------------------------------------------- 1.27s 2025-05-28 17:07:50.134869 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.71s 2025-05-28 17:07:50.136753 | orchestrator | Request device events from the kernel ----------------------------------- 0.66s 2025-05-28 17:07:50.137149 | orchestrator | Reload udev rules ------------------------------------------------------- 0.62s 2025-05-28 17:07:50.138108 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.52s 2025-05-28 17:07:50.138832 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.27s 2025-05-28 17:07:50.139824 | orchestrator | Remove all rook related logical devices --------------------------------- 0.23s 2025-05-28 17:07:52.478876 | orchestrator | Registering Redlock._acquired_script 2025-05-28 17:07:52.479772 | orchestrator | Registering Redlock._extend_script 2025-05-28 17:07:52.479803 | orchestrator | Registering Redlock._release_script 2025-05-28 17:07:52.530803 | orchestrator | 2025-05-28 17:07:52 | INFO  | Task a3d3e7b7-36c7-469f-9aa9-b763eda505f2 (facts) was prepared for execution. 2025-05-28 17:07:52.530878 | orchestrator | 2025-05-28 17:07:52 | INFO  | It takes a moment until task a3d3e7b7-36c7-469f-9aa9-b763eda505f2 (facts) has been started and output is visible here. 
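The "Wipe partitions" play above prepares the Ceph OSD disks (/dev/sdb, /dev/sdc, /dev/sdd on testbed-node-3/4/5) by removing filesystem signatures and stale LVM metadata; UID 167 is the ceph user inside the official Ceph container images, so "logical devices owned by UID 167" finds leftover OSD logical volumes from a previous deployment. A minimal sketch of equivalent Ansible tasks follows; the real OSISM playbook may structure this differently, and the device list is taken from the log output, not from the playbook source:

- name: Wipe partitions (illustrative sketch, not the OSISM source)
  hosts: testbed-node-3,testbed-node-4,testbed-node-5
  become: true
  vars:
    wipe_devices: [/dev/sdb, /dev/sdc, /dev/sdd]  # assumed from the log output
  tasks:
    - name: Wipe partitions with wipefs
      ansible.builtin.command: "wipefs --all --force {{ item }}"
      loop: "{{ wipe_devices }}"

    - name: Overwrite first 32M with zeros
      ansible.builtin.command: "dd if=/dev/zero of={{ item }} bs=1M count=32 oflag=direct"
      loop: "{{ wipe_devices }}"

    - name: Reload udev rules
      ansible.builtin.command: udevadm control --reload-rules

    - name: Request device events from the kernel
      ansible.builtin.command: udevadm trigger

Zeroing the first 32 MiB clears the partition table and any remaining metadata at the start of each disk, and the udev reload and trigger make the kernel re-read the now-empty devices before the Ceph configuration plays run.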
2025-05-28 17:07:56.421583 | orchestrator | 2025-05-28 17:07:56.423357 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-05-28 17:07:56.423410 | orchestrator | 2025-05-28 17:07:56.423424 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-28 17:07:56.423436 | orchestrator | Wednesday 28 May 2025 17:07:56 +0000 (0:00:00.285) 0:00:00.285 ********* 2025-05-28 17:07:57.461451 | orchestrator | ok: [testbed-manager] 2025-05-28 17:07:57.461671 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:07:57.461698 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:07:57.461719 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:07:57.462093 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:07:57.462338 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:07:57.462897 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:07:57.463094 | orchestrator | 2025-05-28 17:07:57.463381 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-28 17:07:57.466580 | orchestrator | Wednesday 28 May 2025 17:07:57 +0000 (0:00:01.042) 0:00:01.328 ********* 2025-05-28 17:07:57.606260 | orchestrator | skipping: [testbed-manager] 2025-05-28 17:07:57.684887 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:07:57.751616 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:07:57.824637 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:07:57.894968 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:07:58.515677 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:07:58.518428 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:07:58.520574 | orchestrator | 2025-05-28 17:07:58.524726 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-28 17:07:58.524754 | orchestrator | 2025-05-28 17:07:58.524766 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-28 17:07:58.525083 | orchestrator | Wednesday 28 May 2025 17:07:58 +0000 (0:00:01.055) 0:00:02.384 ********* 2025-05-28 17:08:03.665633 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:08:03.667883 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:08:03.669313 | orchestrator | ok: [testbed-manager] 2025-05-28 17:08:03.671026 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:08:03.672524 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:08:03.674932 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:08:03.676387 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:08:03.678134 | orchestrator | 2025-05-28 17:08:03.680709 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-28 17:08:03.682002 | orchestrator | 2025-05-28 17:08:03.683847 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-28 17:08:03.685155 | orchestrator | Wednesday 28 May 2025 17:08:03 +0000 (0:00:05.149) 0:00:07.533 ********* 2025-05-28 17:08:03.817595 | orchestrator | skipping: [testbed-manager] 2025-05-28 17:08:03.894957 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:08:03.970495 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:08:04.044823 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:08:04.119205 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:08:04.161438 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:08:04.162748 | orchestrator | skipping: 
[testbed-node-5] 2025-05-28 17:08:04.163474 | orchestrator | 2025-05-28 17:08:04.164077 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:08:04.164691 | orchestrator | 2025-05-28 17:08:04 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-28 17:08:04.164863 | orchestrator | 2025-05-28 17:08:04 | INFO  | Please wait and do not abort execution. 2025-05-28 17:08:04.165778 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:08:04.166198 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:08:04.166993 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:08:04.167682 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:08:04.169073 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:08:04.170306 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:08:04.171280 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:08:04.172337 | orchestrator | 2025-05-28 17:08:04.176076 | orchestrator | 2025-05-28 17:08:04.176783 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:08:04.178891 | orchestrator | Wednesday 28 May 2025 17:08:04 +0000 (0:00:00.498) 0:00:08.032 ********* 2025-05-28 17:08:04.179344 | orchestrator | =============================================================================== 2025-05-28 17:08:04.179822 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.15s 2025-05-28 17:08:04.180333 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.06s 2025-05-28 17:08:04.180857 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.04s 2025-05-28 17:08:04.181538 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s 2025-05-28 17:08:06.657523 | orchestrator | 2025-05-28 17:08:06 | INFO  | Task c645e037-ce54-4b28-954a-ab95bc2b5d79 (ceph-configure-lvm-volumes) was prepared for execution. 2025-05-28 17:08:06.657646 | orchestrator | 2025-05-28 17:08:06 | INFO  | It takes a moment until task c645e037-ce54-4b28-954a-ab95bc2b5d79 (ceph-configure-lvm-volumes) has been started and output is visible here. 
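The facts plays above implement a common pattern: osism.commons.facts ensures a custom facts directory exists on every host, then facts are gathered for all hosts up front so later plays can resolve hostvars of machines outside their own host list; the second, skipped play only fires when a run is restricted with --limit. A hedged sketch of that pattern (play structure and condition are assumptions, not the OSISM source):

- name: Gather facts for all hosts
  hosts: all
  gather_facts: true

- name: Gather facts for all hosts if using --limit
  hosts: all
  gather_facts: false
  tasks:
    - name: Gather facts for hosts excluded by the limit
      ansible.builtin.setup:
      delegate_to: "{{ item }}"
      delegate_facts: true
      loop: "{{ groups['all'] }}"
      # No --limit was used in this run, so this stays skipped,
      # matching the "skipping" results in the log above.
      when: ansible_limit is defined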
2025-05-28 17:08:11.532246 | orchestrator | 2025-05-28 17:08:11.532381 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-28 17:08:11.532399 | orchestrator | 2025-05-28 17:08:11.532411 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-28 17:08:11.532464 | orchestrator | Wednesday 28 May 2025 17:08:11 +0000 (0:00:00.331) 0:00:00.331 ********* 2025-05-28 17:08:11.733665 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-28 17:08:11.737404 | orchestrator | 2025-05-28 17:08:11.737563 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-28 17:08:11.737655 | orchestrator | Wednesday 28 May 2025 17:08:11 +0000 (0:00:00.207) 0:00:00.538 ********* 2025-05-28 17:08:11.975799 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:08:11.976959 | orchestrator | 2025-05-28 17:08:11.977590 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:11.978218 | orchestrator | Wednesday 28 May 2025 17:08:11 +0000 (0:00:00.242) 0:00:00.781 ********* 2025-05-28 17:08:12.328780 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-05-28 17:08:12.334134 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-05-28 17:08:12.334170 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-05-28 17:08:12.334182 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-05-28 17:08:12.334726 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-05-28 17:08:12.335523 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-05-28 17:08:12.339059 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-05-28 17:08:12.339415 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-05-28 17:08:12.339904 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-05-28 17:08:12.341593 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-05-28 17:08:12.342351 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-05-28 17:08:12.343538 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-05-28 17:08:12.344049 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-05-28 17:08:12.344648 | orchestrator | 2025-05-28 17:08:12.345302 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:12.347688 | orchestrator | Wednesday 28 May 2025 17:08:12 +0000 (0:00:00.352) 0:00:01.133 ********* 2025-05-28 17:08:12.735826 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:08:12.737397 | orchestrator | 2025-05-28 17:08:12.737416 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:12.737771 | orchestrator | Wednesday 28 May 2025 17:08:12 +0000 (0:00:00.407) 0:00:01.541 ********* 2025-05-28 17:08:12.907414 | orchestrator | skipping: [testbed-node-3] 2025-05-28 
17:08:12.907577 | orchestrator | 2025-05-28 17:08:12.908664 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:12.908853 | orchestrator | Wednesday 28 May 2025 17:08:12 +0000 (0:00:00.167) 0:00:01.709 ********* 2025-05-28 17:08:13.104033 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:08:13.104788 | orchestrator | 2025-05-28 17:08:13.106641 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:13.106677 | orchestrator | Wednesday 28 May 2025 17:08:13 +0000 (0:00:00.199) 0:00:01.909 ********* 2025-05-28 17:08:13.273031 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:08:13.274342 | orchestrator | 2025-05-28 17:08:13.275635 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:13.275673 | orchestrator | Wednesday 28 May 2025 17:08:13 +0000 (0:00:00.167) 0:00:02.076 ********* 2025-05-28 17:08:13.446836 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:08:13.452355 | orchestrator | 2025-05-28 17:08:13.452470 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:13.454115 | orchestrator | Wednesday 28 May 2025 17:08:13 +0000 (0:00:00.173) 0:00:02.250 ********* 2025-05-28 17:08:13.636217 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:08:13.636329 | orchestrator | 2025-05-28 17:08:13.636343 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:13.640222 | orchestrator | Wednesday 28 May 2025 17:08:13 +0000 (0:00:00.187) 0:00:02.437 ********* 2025-05-28 17:08:13.834196 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:08:13.834520 | orchestrator | 2025-05-28 17:08:13.835397 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:13.837346 | orchestrator | Wednesday 28 May 2025 17:08:13 +0000 (0:00:00.201) 0:00:02.638 ********* 2025-05-28 17:08:14.027404 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:08:14.028952 | orchestrator | 2025-05-28 17:08:14.031871 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:14.032960 | orchestrator | Wednesday 28 May 2025 17:08:14 +0000 (0:00:00.193) 0:00:02.832 ********* 2025-05-28 17:08:14.391065 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5) 2025-05-28 17:08:14.391298 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5) 2025-05-28 17:08:14.393662 | orchestrator | 2025-05-28 17:08:14.395067 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:14.395381 | orchestrator | Wednesday 28 May 2025 17:08:14 +0000 (0:00:00.361) 0:00:03.194 ********* 2025-05-28 17:08:14.749538 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_da6420c4-4562-42e6-8445-8de06d590092) 2025-05-28 17:08:14.749691 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_da6420c4-4562-42e6-8445-8de06d590092) 2025-05-28 17:08:14.751140 | orchestrator | 2025-05-28 17:08:14.751694 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:14.752187 | orchestrator | Wednesday 28 May 2025 17:08:14 +0000 (0:00:00.361) 0:00:03.555 ********* 2025-05-28 
17:08:15.361328 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_66780fe2-f30a-4cd5-a925-045679329f08) 2025-05-28 17:08:15.361617 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_66780fe2-f30a-4cd5-a925-045679329f08) 2025-05-28 17:08:15.361963 | orchestrator | 2025-05-28 17:08:15.363605 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:15.365697 | orchestrator | Wednesday 28 May 2025 17:08:15 +0000 (0:00:00.606) 0:00:04.162 ********* 2025-05-28 17:08:15.888258 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_705788e5-cc1d-4d40-94fd-fb0e2f22a483) 2025-05-28 17:08:15.888492 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_705788e5-cc1d-4d40-94fd-fb0e2f22a483) 2025-05-28 17:08:15.889852 | orchestrator | 2025-05-28 17:08:15.890197 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:15.890597 | orchestrator | Wednesday 28 May 2025 17:08:15 +0000 (0:00:00.531) 0:00:04.693 ********* 2025-05-28 17:08:16.439998 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-28 17:08:16.440192 | orchestrator | 2025-05-28 17:08:16.440672 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:16.441037 | orchestrator | Wednesday 28 May 2025 17:08:16 +0000 (0:00:00.549) 0:00:05.242 ********* 2025-05-28 17:08:16.789836 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-05-28 17:08:16.789960 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-05-28 17:08:16.792562 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-05-28 17:08:16.793002 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-05-28 17:08:16.793779 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-05-28 17:08:16.794445 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-05-28 17:08:16.796619 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-05-28 17:08:16.796639 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-05-28 17:08:16.796948 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-05-28 17:08:16.797594 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-05-28 17:08:16.797993 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-05-28 17:08:16.798694 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-05-28 17:08:16.798906 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-05-28 17:08:16.800681 | orchestrator | 2025-05-28 17:08:16.800941 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:16.801294 | orchestrator | Wednesday 28 May 2025 17:08:16 +0000 (0:00:00.351) 0:00:05.594 ********* 2025-05-28 17:08:16.969725 | orchestrator | skipping: [testbed-node-3] 
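Each included _add-device-links.yml instance runs once per block device and records that device's stable /dev/disk/by-id symlinks (the scsi-0QEMU_... and scsi-SQEMU_... names above), so the generated Ceph configuration can reference disks by identifiers that survive reboots; the loop0..loop7 devices are skipped because they expose no by-id links, while sr0 still reports its ata-QEMU_DVD-ROM link. A speculative sketch of what such an include might look like, with the fact name invented for illustration:

- name: Add known links to the list of available block devices
  ansible.builtin.set_fact:
    available_block_devices: "{{ available_block_devices + [link] }}"
  # "item" is the device name passed in by the including loop (sda, sdb, ...);
  # a device without by-id links produces an empty loop, which Ansible
  # reports as "skipping", matching the log output for loop0..loop7.
  loop: "{{ ansible_facts.devices[item].links.ids | default([]) }}"
  loop_control:
    loop_var: link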
2025-05-28 17:08:16.970530 | orchestrator | 2025-05-28 17:08:16.970644 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:16.971915 | orchestrator | Wednesday 28 May 2025 17:08:16 +0000 (0:00:00.179) 0:00:05.773 ********* 2025-05-28 17:08:17.144919 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:08:17.145478 | orchestrator | 2025-05-28 17:08:17.146119 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:17.146804 | orchestrator | Wednesday 28 May 2025 17:08:17 +0000 (0:00:00.174) 0:00:05.948 ********* 2025-05-28 17:08:17.340303 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:08:17.344883 | orchestrator | 2025-05-28 17:08:17.344933 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:17.344947 | orchestrator | Wednesday 28 May 2025 17:08:17 +0000 (0:00:00.196) 0:00:06.144 ********* 2025-05-28 17:08:17.518559 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:08:17.518785 | orchestrator | 2025-05-28 17:08:17.520263 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:17.521247 | orchestrator | Wednesday 28 May 2025 17:08:17 +0000 (0:00:00.178) 0:00:06.323 ********* 2025-05-28 17:08:17.686338 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:08:17.687694 | orchestrator | 2025-05-28 17:08:17.688343 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:17.691575 | orchestrator | Wednesday 28 May 2025 17:08:17 +0000 (0:00:00.166) 0:00:06.489 ********* 2025-05-28 17:08:17.904476 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:08:17.904782 | orchestrator | 2025-05-28 17:08:17.906629 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:17.907270 | orchestrator | Wednesday 28 May 2025 17:08:17 +0000 (0:00:00.219) 0:00:06.709 ********* 2025-05-28 17:08:18.076582 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:08:18.076712 | orchestrator | 2025-05-28 17:08:18.076729 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:18.076742 | orchestrator | Wednesday 28 May 2025 17:08:18 +0000 (0:00:00.165) 0:00:06.875 ********* 2025-05-28 17:08:18.236493 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:08:18.236618 | orchestrator | 2025-05-28 17:08:18.237747 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:18.237996 | orchestrator | Wednesday 28 May 2025 17:08:18 +0000 (0:00:00.162) 0:00:07.039 ********* 2025-05-28 17:08:19.132038 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-05-28 17:08:19.132626 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-05-28 17:08:19.133219 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-05-28 17:08:19.133518 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-05-28 17:08:19.133785 | orchestrator | 2025-05-28 17:08:19.134131 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:19.134530 | orchestrator | Wednesday 28 May 2025 17:08:19 +0000 (0:00:00.899) 0:00:07.938 ********* 2025-05-28 17:08:19.292197 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:08:19.292321 | orchestrator | 2025-05-28 17:08:19.292336 | orchestrator | 
TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:19.292350 | orchestrator | Wednesday 28 May 2025 17:08:19 +0000 (0:00:00.156) 0:00:08.095 ********* 2025-05-28 17:08:19.449033 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:08:19.451025 | orchestrator | 2025-05-28 17:08:19.451337 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:19.451653 | orchestrator | Wednesday 28 May 2025 17:08:19 +0000 (0:00:00.158) 0:00:08.253 ********* 2025-05-28 17:08:19.613319 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:08:19.616032 | orchestrator | 2025-05-28 17:08:19.616474 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:19.616498 | orchestrator | Wednesday 28 May 2025 17:08:19 +0000 (0:00:00.164) 0:00:08.418 ********* 2025-05-28 17:08:19.775893 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:08:19.780982 | orchestrator | 2025-05-28 17:08:19.781073 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-28 17:08:19.781178 | orchestrator | Wednesday 28 May 2025 17:08:19 +0000 (0:00:00.161) 0:00:08.580 ********* 2025-05-28 17:08:19.932486 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-05-28 17:08:19.933690 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-05-28 17:08:19.934617 | orchestrator | 2025-05-28 17:08:19.935573 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-28 17:08:19.938567 | orchestrator | Wednesday 28 May 2025 17:08:19 +0000 (0:00:00.157) 0:00:08.738 ********* 2025-05-28 17:08:20.058632 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:08:20.058753 | orchestrator | 2025-05-28 17:08:20.058769 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-28 17:08:20.059372 | orchestrator | Wednesday 28 May 2025 17:08:20 +0000 (0:00:00.122) 0:00:08.860 ********* 2025-05-28 17:08:20.208729 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:08:20.209622 | orchestrator | 2025-05-28 17:08:20.210860 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-28 17:08:20.213917 | orchestrator | Wednesday 28 May 2025 17:08:20 +0000 (0:00:00.152) 0:00:09.013 ********* 2025-05-28 17:08:20.345302 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:08:20.349146 | orchestrator | 2025-05-28 17:08:20.350131 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-28 17:08:20.351378 | orchestrator | Wednesday 28 May 2025 17:08:20 +0000 (0:00:00.134) 0:00:09.147 ********* 2025-05-28 17:08:20.483884 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:08:20.484930 | orchestrator | 2025-05-28 17:08:20.486835 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-28 17:08:20.487992 | orchestrator | Wednesday 28 May 2025 17:08:20 +0000 (0:00:00.138) 0:00:09.285 ********* 2025-05-28 17:08:20.655051 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b27f73ed-a290-5ab5-82ba-70ebe910dd97'}}) 2025-05-28 17:08:20.655171 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fbdc558b-af0f-50ef-b610-4a3c4fb87cac'}}) 2025-05-28 17:08:20.656362 | orchestrator | 
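Each OSD device is assigned a version-5 (name-based) UUID, note the 5 in the third group of b27f73ed-a290-5ab5-..., which suggests the IDs are derived deterministically rather than drawn at random. From that UUID the block-only lvm_volumes entry is built as data: osd-block-<uuid> inside data_vg: ceph-<uuid>, the layout that ceph-volume's lvm tooling consumes. A sketch of the derivation under those assumptions (the seed string passed to to_uuid is a guess):

- name: Set UUIDs for OSD VGs/LVs
  ansible.builtin.set_fact:
    ceph_osd_devices: >-
      {{ ceph_osd_devices
         | combine({item.key: {'osd_lvm_uuid': (inventory_hostname ~ item.key) | to_uuid}}) }}
  loop: "{{ ceph_osd_devices | dict2items }}"
  when: item.value is none   # only fill in devices that have no UUID yet

- name: Generate lvm_volumes structure (block only)
  ansible.builtin.set_fact:
    lvm_volumes: >-
      {{ lvm_volumes | default([]) + [{
           'data': 'osd-block-' ~ item.value.osd_lvm_uuid,
           'data_vg': 'ceph-' ~ item.value.osd_lvm_uuid }] }}
  loop: "{{ ceph_osd_devices | dict2items }}"

Because to_uuid is name-based (UUIDv5), rerunning the play against the same host and device names would regenerate the same VG/LV names instead of accumulating new ones.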
2025-05-28 17:08:20.657329 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-28 17:08:20.658335 | orchestrator | Wednesday 28 May 2025 17:08:20 +0000 (0:00:00.171) 0:00:09.457 ********* 2025-05-28 17:08:20.810176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b27f73ed-a290-5ab5-82ba-70ebe910dd97'}})  2025-05-28 17:08:20.811848 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fbdc558b-af0f-50ef-b610-4a3c4fb87cac'}})  2025-05-28 17:08:20.814230 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:08:20.815760 | orchestrator | 2025-05-28 17:08:20.816942 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-28 17:08:20.820532 | orchestrator | Wednesday 28 May 2025 17:08:20 +0000 (0:00:00.155) 0:00:09.612 ********* 2025-05-28 17:08:21.207676 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b27f73ed-a290-5ab5-82ba-70ebe910dd97'}})  2025-05-28 17:08:21.208168 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fbdc558b-af0f-50ef-b610-4a3c4fb87cac'}})  2025-05-28 17:08:21.213813 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:08:21.215388 | orchestrator | 2025-05-28 17:08:21.217686 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-28 17:08:21.219036 | orchestrator | Wednesday 28 May 2025 17:08:21 +0000 (0:00:00.394) 0:00:10.007 ********* 2025-05-28 17:08:21.404014 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b27f73ed-a290-5ab5-82ba-70ebe910dd97'}})  2025-05-28 17:08:21.405981 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fbdc558b-af0f-50ef-b610-4a3c4fb87cac'}})  2025-05-28 17:08:21.407955 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:08:21.408516 | orchestrator | 2025-05-28 17:08:21.408807 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-28 17:08:21.409480 | orchestrator | Wednesday 28 May 2025 17:08:21 +0000 (0:00:00.199) 0:00:10.207 ********* 2025-05-28 17:08:21.593289 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:08:21.594652 | orchestrator | 2025-05-28 17:08:21.595030 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-28 17:08:21.598448 | orchestrator | Wednesday 28 May 2025 17:08:21 +0000 (0:00:00.190) 0:00:10.398 ********* 2025-05-28 17:08:21.783006 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:08:21.783511 | orchestrator | 2025-05-28 17:08:21.784549 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-28 17:08:21.785018 | orchestrator | Wednesday 28 May 2025 17:08:21 +0000 (0:00:00.189) 0:00:10.587 ********* 2025-05-28 17:08:21.989595 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:08:21.991228 | orchestrator | 2025-05-28 17:08:21.994152 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-28 17:08:21.995586 | orchestrator | Wednesday 28 May 2025 17:08:21 +0000 (0:00:00.206) 0:00:10.794 ********* 2025-05-28 17:08:22.217747 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:08:22.218650 | orchestrator | 2025-05-28 17:08:22.220257 | orchestrator | TASK [Set DB+WAL devices config data] 
****************************************** 2025-05-28 17:08:22.220284 | orchestrator | Wednesday 28 May 2025 17:08:22 +0000 (0:00:00.228) 0:00:11.022 ********* 2025-05-28 17:08:22.363033 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:08:22.366157 | orchestrator | 2025-05-28 17:08:22.366523 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-05-28 17:08:22.367944 | orchestrator | Wednesday 28 May 2025 17:08:22 +0000 (0:00:00.144) 0:00:11.166 ********* 2025-05-28 17:08:22.559899 | orchestrator | ok: [testbed-node-3] => { 2025-05-28 17:08:22.560750 | orchestrator |  "ceph_osd_devices": { 2025-05-28 17:08:22.561179 | orchestrator |  "sdb": { 2025-05-28 17:08:22.563074 | orchestrator |  "osd_lvm_uuid": "b27f73ed-a290-5ab5-82ba-70ebe910dd97" 2025-05-28 17:08:22.565156 | orchestrator |  }, 2025-05-28 17:08:22.565843 | orchestrator |  "sdc": { 2025-05-28 17:08:22.570163 | orchestrator |  "osd_lvm_uuid": "fbdc558b-af0f-50ef-b610-4a3c4fb87cac" 2025-05-28 17:08:22.570922 | orchestrator |  } 2025-05-28 17:08:22.571184 | orchestrator |  } 2025-05-28 17:08:22.573560 | orchestrator | } 2025-05-28 17:08:22.573594 | orchestrator | 2025-05-28 17:08:22.573971 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-05-28 17:08:22.575249 | orchestrator | Wednesday 28 May 2025 17:08:22 +0000 (0:00:00.196) 0:00:11.363 ********* 2025-05-28 17:08:22.708345 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:08:22.708771 | orchestrator | 2025-05-28 17:08:22.709075 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-05-28 17:08:22.709680 | orchestrator | Wednesday 28 May 2025 17:08:22 +0000 (0:00:00.148) 0:00:11.511 ********* 2025-05-28 17:08:22.878092 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:08:22.882071 | orchestrator | 2025-05-28 17:08:22.882462 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-05-28 17:08:22.882764 | orchestrator | Wednesday 28 May 2025 17:08:22 +0000 (0:00:00.170) 0:00:11.682 ********* 2025-05-28 17:08:23.025443 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:08:23.025611 | orchestrator | 2025-05-28 17:08:23.029248 | orchestrator | TASK [Print configuration data] ************************************************ 2025-05-28 17:08:23.029542 | orchestrator | Wednesday 28 May 2025 17:08:23 +0000 (0:00:00.147) 0:00:11.830 ********* 2025-05-28 17:08:23.250322 | orchestrator | changed: [testbed-node-3] => { 2025-05-28 17:08:23.250530 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-05-28 17:08:23.253878 | orchestrator |  "ceph_osd_devices": { 2025-05-28 17:08:23.254173 | orchestrator |  "sdb": { 2025-05-28 17:08:23.255909 | orchestrator |  "osd_lvm_uuid": "b27f73ed-a290-5ab5-82ba-70ebe910dd97" 2025-05-28 17:08:23.256233 | orchestrator |  }, 2025-05-28 17:08:23.256688 | orchestrator |  "sdc": { 2025-05-28 17:08:23.257874 | orchestrator |  "osd_lvm_uuid": "fbdc558b-af0f-50ef-b610-4a3c4fb87cac" 2025-05-28 17:08:23.258416 | orchestrator |  } 2025-05-28 17:08:23.258526 | orchestrator |  }, 2025-05-28 17:08:23.259020 | orchestrator |  "lvm_volumes": [ 2025-05-28 17:08:23.259385 | orchestrator |  { 2025-05-28 17:08:23.259681 | orchestrator |  "data": "osd-block-b27f73ed-a290-5ab5-82ba-70ebe910dd97", 2025-05-28 17:08:23.261993 | orchestrator |  "data_vg": "ceph-b27f73ed-a290-5ab5-82ba-70ebe910dd97" 2025-05-28 17:08:23.262014 | orchestrator |  }, 2025-05-28 
17:08:23.262526 | orchestrator |  { 2025-05-28 17:08:23.262953 | orchestrator |  "data": "osd-block-fbdc558b-af0f-50ef-b610-4a3c4fb87cac", 2025-05-28 17:08:23.263598 | orchestrator |  "data_vg": "ceph-fbdc558b-af0f-50ef-b610-4a3c4fb87cac" 2025-05-28 17:08:23.264066 | orchestrator |  } 2025-05-28 17:08:23.264749 | orchestrator |  ] 2025-05-28 17:08:23.265491 | orchestrator |  } 2025-05-28 17:08:23.265715 | orchestrator | } 2025-05-28 17:08:23.266106 | orchestrator | 2025-05-28 17:08:23.266955 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-28 17:08:23.267055 | orchestrator | Wednesday 28 May 2025 17:08:23 +0000 (0:00:00.225) 0:00:12.056 ********* 2025-05-28 17:08:25.426503 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-28 17:08:25.429520 | orchestrator | 2025-05-28 17:08:25.430537 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-28 17:08:25.431374 | orchestrator | 2025-05-28 17:08:25.434520 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-28 17:08:25.434544 | orchestrator | Wednesday 28 May 2025 17:08:25 +0000 (0:00:02.173) 0:00:14.229 ********* 2025-05-28 17:08:25.678239 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-28 17:08:25.678338 | orchestrator | 2025-05-28 17:08:25.680858 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-28 17:08:25.681462 | orchestrator | Wednesday 28 May 2025 17:08:25 +0000 (0:00:00.251) 0:00:14.481 ********* 2025-05-28 17:08:25.907165 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:08:25.907264 | orchestrator | 2025-05-28 17:08:25.907724 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:25.908134 | orchestrator | Wednesday 28 May 2025 17:08:25 +0000 (0:00:00.230) 0:00:14.712 ********* 2025-05-28 17:08:26.273657 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-28 17:08:26.274792 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-28 17:08:26.276694 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-28 17:08:26.278292 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-28 17:08:26.279566 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-28 17:08:26.280897 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-28 17:08:26.282147 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-28 17:08:26.283242 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-28 17:08:26.283858 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-28 17:08:26.285301 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-28 17:08:26.286064 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-28 17:08:26.287337 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-05-28 17:08:26.290970 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-28 17:08:26.292645 | orchestrator | 2025-05-28 17:08:26.293073 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:26.294265 | orchestrator | Wednesday 28 May 2025 17:08:26 +0000 (0:00:00.366) 0:00:15.078 ********* 2025-05-28 17:08:26.463205 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:08:26.471345 | orchestrator | 2025-05-28 17:08:26.475599 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:26.475625 | orchestrator | Wednesday 28 May 2025 17:08:26 +0000 (0:00:00.187) 0:00:15.265 ********* 2025-05-28 17:08:26.649767 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:08:26.649980 | orchestrator | 2025-05-28 17:08:26.651375 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:26.651658 | orchestrator | Wednesday 28 May 2025 17:08:26 +0000 (0:00:00.189) 0:00:15.454 ********* 2025-05-28 17:08:26.838372 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:08:26.840247 | orchestrator | 2025-05-28 17:08:26.845128 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:26.846081 | orchestrator | Wednesday 28 May 2025 17:08:26 +0000 (0:00:00.188) 0:00:15.642 ********* 2025-05-28 17:08:27.027906 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:08:27.028148 | orchestrator | 2025-05-28 17:08:27.028696 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:27.029724 | orchestrator | Wednesday 28 May 2025 17:08:27 +0000 (0:00:00.190) 0:00:15.832 ********* 2025-05-28 17:08:27.643097 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:08:27.644665 | orchestrator | 2025-05-28 17:08:27.644744 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:27.646247 | orchestrator | Wednesday 28 May 2025 17:08:27 +0000 (0:00:00.611) 0:00:16.444 ********* 2025-05-28 17:08:27.827624 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:08:27.828841 | orchestrator | 2025-05-28 17:08:27.829219 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:27.831031 | orchestrator | Wednesday 28 May 2025 17:08:27 +0000 (0:00:00.182) 0:00:16.626 ********* 2025-05-28 17:08:28.043275 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:08:28.043440 | orchestrator | 2025-05-28 17:08:28.043456 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:28.043473 | orchestrator | Wednesday 28 May 2025 17:08:28 +0000 (0:00:00.217) 0:00:16.844 ********* 2025-05-28 17:08:28.251722 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:08:28.254522 | orchestrator | 2025-05-28 17:08:28.254616 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:28.254727 | orchestrator | Wednesday 28 May 2025 17:08:28 +0000 (0:00:00.211) 0:00:17.056 ********* 2025-05-28 17:08:28.663823 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c) 2025-05-28 17:08:28.665253 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c) 2025-05-28 17:08:28.665320 | orchestrator | 2025-05-28 
17:08:28.665438 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:28.666835 | orchestrator | Wednesday 28 May 2025 17:08:28 +0000 (0:00:00.413) 0:00:17.469 ********* 2025-05-28 17:08:29.073488 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0444fcd6-ace4-41be-a60f-d61a86741ad0) 2025-05-28 17:08:29.073616 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0444fcd6-ace4-41be-a60f-d61a86741ad0) 2025-05-28 17:08:29.075669 | orchestrator | 2025-05-28 17:08:29.075886 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:29.076156 | orchestrator | Wednesday 28 May 2025 17:08:29 +0000 (0:00:00.409) 0:00:17.879 ********* 2025-05-28 17:08:29.484796 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d5a98c17-e489-4dc0-a000-f021a8d49d4d) 2025-05-28 17:08:29.486129 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d5a98c17-e489-4dc0-a000-f021a8d49d4d) 2025-05-28 17:08:29.487511 | orchestrator | 2025-05-28 17:08:29.489055 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:29.491006 | orchestrator | Wednesday 28 May 2025 17:08:29 +0000 (0:00:00.407) 0:00:18.286 ********* 2025-05-28 17:08:29.889652 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c3ba669b-02ce-4ac9-8d34-f5b1bbc1f6b4) 2025-05-28 17:08:29.890333 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c3ba669b-02ce-4ac9-8d34-f5b1bbc1f6b4) 2025-05-28 17:08:29.891524 | orchestrator | 2025-05-28 17:08:29.892783 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:29.893420 | orchestrator | Wednesday 28 May 2025 17:08:29 +0000 (0:00:00.406) 0:00:18.693 ********* 2025-05-28 17:08:30.217675 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-28 17:08:30.217822 | orchestrator | 2025-05-28 17:08:30.218997 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:30.220443 | orchestrator | Wednesday 28 May 2025 17:08:30 +0000 (0:00:00.328) 0:00:19.021 ********* 2025-05-28 17:08:30.592575 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-05-28 17:08:30.593576 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-05-28 17:08:30.594831 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-28 17:08:30.595555 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-28 17:08:30.596430 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-05-28 17:08:30.597585 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-28 17:08:30.600105 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-28 17:08:30.600818 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-28 17:08:30.601111 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-28 17:08:30.601715 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-28 17:08:30.602146 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-05-28 17:08:30.602695 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-28 17:08:30.603552 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-28 17:08:30.603806 | orchestrator | 2025-05-28 17:08:30.604666 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:30.605129 | orchestrator | Wednesday 28 May 2025 17:08:30 +0000 (0:00:00.374) 0:00:19.396 ********* 2025-05-28 17:08:30.800749 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:08:30.801956 | orchestrator | 2025-05-28 17:08:30.803262 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:30.804184 | orchestrator | Wednesday 28 May 2025 17:08:30 +0000 (0:00:00.207) 0:00:19.604 ********* 2025-05-28 17:08:31.356195 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:08:31.357102 | orchestrator | 2025-05-28 17:08:31.358717 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:31.361475 | orchestrator | Wednesday 28 May 2025 17:08:31 +0000 (0:00:00.557) 0:00:20.161 ********* 2025-05-28 17:08:31.537685 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:08:31.538907 | orchestrator | 2025-05-28 17:08:31.539111 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:31.539555 | orchestrator | Wednesday 28 May 2025 17:08:31 +0000 (0:00:00.180) 0:00:20.342 ********* 2025-05-28 17:08:31.700256 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:08:31.701211 | orchestrator | 2025-05-28 17:08:31.702473 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:31.703158 | orchestrator | Wednesday 28 May 2025 17:08:31 +0000 (0:00:00.163) 0:00:20.505 ********* 2025-05-28 17:08:31.857758 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:08:31.857959 | orchestrator | 2025-05-28 17:08:31.859133 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:31.859349 | orchestrator | Wednesday 28 May 2025 17:08:31 +0000 (0:00:00.158) 0:00:20.663 ********* 2025-05-28 17:08:32.081532 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:08:32.082302 | orchestrator | 2025-05-28 17:08:32.083823 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:32.084208 | orchestrator | Wednesday 28 May 2025 17:08:32 +0000 (0:00:00.218) 0:00:20.881 ********* 2025-05-28 17:08:32.309061 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:08:32.309183 | orchestrator | 2025-05-28 17:08:32.310327 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:32.310418 | orchestrator | Wednesday 28 May 2025 17:08:32 +0000 (0:00:00.230) 0:00:21.112 ********* 2025-05-28 17:08:32.485274 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:08:32.486399 | orchestrator | 2025-05-28 17:08:32.488073 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:32.490772 | orchestrator | Wednesday 28 May 2025 
17:08:32 +0000 (0:00:00.179) 0:00:21.291 ********* 2025-05-28 17:08:33.047256 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-28 17:08:33.048268 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-28 17:08:33.049033 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-28 17:08:33.049990 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-28 17:08:33.050960 | orchestrator | 2025-05-28 17:08:33.051564 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:33.052283 | orchestrator | Wednesday 28 May 2025 17:08:33 +0000 (0:00:00.560) 0:00:21.852 ********* 2025-05-28 17:08:33.240021 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:08:33.240907 | orchestrator | 2025-05-28 17:08:33.241732 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:33.244415 | orchestrator | Wednesday 28 May 2025 17:08:33 +0000 (0:00:00.192) 0:00:22.045 ********* 2025-05-28 17:08:33.424768 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:08:33.430681 | orchestrator | 2025-05-28 17:08:33.431142 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:33.431660 | orchestrator | Wednesday 28 May 2025 17:08:33 +0000 (0:00:00.184) 0:00:22.229 ********* 2025-05-28 17:08:33.615285 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:08:33.615442 | orchestrator | 2025-05-28 17:08:33.615459 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:33.615500 | orchestrator | Wednesday 28 May 2025 17:08:33 +0000 (0:00:00.188) 0:00:22.418 ********* 2025-05-28 17:08:33.799841 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:08:33.799960 | orchestrator | 2025-05-28 17:08:33.800267 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-28 17:08:33.800644 | orchestrator | Wednesday 28 May 2025 17:08:33 +0000 (0:00:00.187) 0:00:22.605 ********* 2025-05-28 17:08:34.065264 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-05-28 17:08:34.069080 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-05-28 17:08:34.069662 | orchestrator | 2025-05-28 17:08:34.070395 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-28 17:08:34.070926 | orchestrator | Wednesday 28 May 2025 17:08:34 +0000 (0:00:00.264) 0:00:22.870 ********* 2025-05-28 17:08:34.185935 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:08:34.186765 | orchestrator | 2025-05-28 17:08:34.188660 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-28 17:08:34.189349 | orchestrator | Wednesday 28 May 2025 17:08:34 +0000 (0:00:00.118) 0:00:22.988 ********* 2025-05-28 17:08:34.326756 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:08:34.331903 | orchestrator | 2025-05-28 17:08:34.333927 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-28 17:08:34.335084 | orchestrator | Wednesday 28 May 2025 17:08:34 +0000 (0:00:00.143) 0:00:23.132 ********* 2025-05-28 17:08:34.456886 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:08:34.460608 | orchestrator | 2025-05-28 17:08:34.460769 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-28 
17:08:34.461166 | orchestrator | Wednesday 28 May 2025 17:08:34 +0000 (0:00:00.129) 0:00:23.261 ********* 2025-05-28 17:08:34.571632 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:08:34.571862 | orchestrator | 2025-05-28 17:08:34.574208 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-28 17:08:34.574728 | orchestrator | Wednesday 28 May 2025 17:08:34 +0000 (0:00:00.112) 0:00:23.373 ********* 2025-05-28 17:08:34.715656 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25'}}) 2025-05-28 17:08:34.715765 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7e811d1b-ccc9-571e-beba-983efbae239d'}}) 2025-05-28 17:08:34.715779 | orchestrator | 2025-05-28 17:08:34.716711 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-28 17:08:34.719873 | orchestrator | Wednesday 28 May 2025 17:08:34 +0000 (0:00:00.144) 0:00:23.517 ********* 2025-05-28 17:08:34.835414 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25'}})  2025-05-28 17:08:34.836896 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7e811d1b-ccc9-571e-beba-983efbae239d'}})  2025-05-28 17:08:34.838117 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:08:34.839383 | orchestrator | 2025-05-28 17:08:34.841764 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-28 17:08:34.842298 | orchestrator | Wednesday 28 May 2025 17:08:34 +0000 (0:00:00.123) 0:00:23.641 ********* 2025-05-28 17:08:34.941334 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25'}})  2025-05-28 17:08:34.945523 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7e811d1b-ccc9-571e-beba-983efbae239d'}})  2025-05-28 17:08:34.947488 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:08:34.948468 | orchestrator | 2025-05-28 17:08:34.949385 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-28 17:08:34.950296 | orchestrator | Wednesday 28 May 2025 17:08:34 +0000 (0:00:00.104) 0:00:23.745 ********* 2025-05-28 17:08:35.066503 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25'}})  2025-05-28 17:08:35.066920 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7e811d1b-ccc9-571e-beba-983efbae239d'}})  2025-05-28 17:08:35.068483 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:08:35.070222 | orchestrator | 2025-05-28 17:08:35.071209 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-28 17:08:35.073731 | orchestrator | Wednesday 28 May 2025 17:08:35 +0000 (0:00:00.125) 0:00:23.870 ********* 2025-05-28 17:08:35.189917 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:08:35.190084 | orchestrator | 2025-05-28 17:08:35.191852 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-28 17:08:35.192139 | orchestrator | Wednesday 28 May 2025 17:08:35 +0000 (0:00:00.122) 0:00:23.993 ********* 2025-05-28 17:08:35.320573 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:08:35.321682 
| orchestrator |
2025-05-28 17:08:35.323064 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-05-28 17:08:35.324044 | orchestrator | Wednesday 28 May 2025 17:08:35 +0000 (0:00:00.132) 0:00:24.125 *********
2025-05-28 17:08:35.429870 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:08:35.430696 | orchestrator |
2025-05-28 17:08:35.430710 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-05-28 17:08:35.430718 | orchestrator | Wednesday 28 May 2025 17:08:35 +0000 (0:00:00.109) 0:00:24.235 *********
2025-05-28 17:08:35.665747 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:08:35.666982 | orchestrator |
2025-05-28 17:08:35.669240 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-05-28 17:08:35.669654 | orchestrator | Wednesday 28 May 2025 17:08:35 +0000 (0:00:00.235) 0:00:24.471 *********
2025-05-28 17:08:35.789140 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:08:35.790689 | orchestrator |
2025-05-28 17:08:35.791697 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-28 17:08:35.793284 | orchestrator | Wednesday 28 May 2025 17:08:35 +0000 (0:00:00.123) 0:00:24.594 *********
2025-05-28 17:08:35.914708 | orchestrator | ok: [testbed-node-4] => {
2025-05-28 17:08:35.914941 | orchestrator |  "ceph_osd_devices": {
2025-05-28 17:08:35.918637 | orchestrator |  "sdb": {
2025-05-28 17:08:35.919233 | orchestrator |  "osd_lvm_uuid": "b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25"
2025-05-28 17:08:35.919792 | orchestrator |  },
2025-05-28 17:08:35.921994 | orchestrator |  "sdc": {
2025-05-28 17:08:35.923172 | orchestrator |  "osd_lvm_uuid": "7e811d1b-ccc9-571e-beba-983efbae239d"
2025-05-28 17:08:35.924486 | orchestrator |  }
2025-05-28 17:08:35.924873 | orchestrator |  }
2025-05-28 17:08:35.925406 | orchestrator | }
2025-05-28 17:08:35.925520 | orchestrator |
2025-05-28 17:08:35.926056 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-28 17:08:35.926522 | orchestrator | Wednesday 28 May 2025 17:08:35 +0000 (0:00:00.124) 0:00:24.719 *********
2025-05-28 17:08:36.032039 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:08:36.033124 | orchestrator |
2025-05-28 17:08:36.034003 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-28 17:08:36.035826 | orchestrator | Wednesday 28 May 2025 17:08:36 +0000 (0:00:00.117) 0:00:24.836 *********
2025-05-28 17:08:36.144184 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:08:36.147277 | orchestrator |
2025-05-28 17:08:36.147305 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-28 17:08:36.147347 | orchestrator | Wednesday 28 May 2025 17:08:36 +0000 (0:00:00.112) 0:00:24.949 *********
2025-05-28 17:08:36.257640 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:08:36.257921 | orchestrator |
2025-05-28 17:08:36.259665 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-28 17:08:36.260118 | orchestrator | Wednesday 28 May 2025 17:08:36 +0000 (0:00:00.112) 0:00:25.062 *********
2025-05-28 17:08:36.430121 | orchestrator | changed: [testbed-node-4] => {
2025-05-28 17:08:36.432027 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-05-28 17:08:36.433338 | orchestrator |  "ceph_osd_devices": {
2025-05-28 17:08:36.434251 | orchestrator |  "sdb": {
2025-05-28 17:08:36.434576 | orchestrator |  "osd_lvm_uuid": "b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25"
2025-05-28 17:08:36.434969 | orchestrator |  },
2025-05-28 17:08:36.435996 | orchestrator |  "sdc": {
2025-05-28 17:08:36.436281 | orchestrator |  "osd_lvm_uuid": "7e811d1b-ccc9-571e-beba-983efbae239d"
2025-05-28 17:08:36.436529 | orchestrator |  }
2025-05-28 17:08:36.438205 | orchestrator |  },
2025-05-28 17:08:36.438497 | orchestrator |  "lvm_volumes": [
2025-05-28 17:08:36.438611 | orchestrator |  {
2025-05-28 17:08:36.439382 | orchestrator |  "data": "osd-block-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25",
2025-05-28 17:08:36.439700 | orchestrator |  "data_vg": "ceph-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25"
2025-05-28 17:08:36.440041 | orchestrator |  },
2025-05-28 17:08:36.441722 | orchestrator |  {
2025-05-28 17:08:36.441907 | orchestrator |  "data": "osd-block-7e811d1b-ccc9-571e-beba-983efbae239d",
2025-05-28 17:08:36.442346 | orchestrator |  "data_vg": "ceph-7e811d1b-ccc9-571e-beba-983efbae239d"
2025-05-28 17:08:36.442649 | orchestrator |  }
2025-05-28 17:08:36.442987 | orchestrator |  ]
2025-05-28 17:08:36.443352 | orchestrator |  }
2025-05-28 17:08:36.443648 | orchestrator | }
2025-05-28 17:08:36.443906 | orchestrator |
2025-05-28 17:08:36.444280 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-05-28 17:08:36.444676 | orchestrator | Wednesday 28 May 2025 17:08:36 +0000 (0:00:00.172) 0:00:25.234 *********
2025-05-28 17:08:37.339546 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-05-28 17:08:37.339717 | orchestrator |
2025-05-28 17:08:37.339802 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-05-28 17:08:37.340673 | orchestrator |
2025-05-28 17:08:37.341354 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-28 17:08:37.342273 | orchestrator | Wednesday 28 May 2025 17:08:37 +0000 (0:00:00.906) 0:00:26.141 *********
2025-05-28 17:08:37.687493 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-05-28 17:08:37.687956 | orchestrator |
2025-05-28 17:08:37.693484 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-28 17:08:37.693639 | orchestrator | Wednesday 28 May 2025 17:08:37 +0000 (0:00:00.351) 0:00:26.492 *********
2025-05-28 17:08:38.250316 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:08:38.251464 | orchestrator |
2025-05-28 17:08:38.252021 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 17:08:38.252844 | orchestrator | Wednesday 28 May 2025 17:08:38 +0000 (0:00:00.562) 0:00:27.055 *********
2025-05-28 17:08:38.619611 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-05-28 17:08:38.623068 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-05-28 17:08:38.626421 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-05-28 17:08:38.630822 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-05-28 17:08:38.631066 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-05-28 17:08:38.633880 | orchestrator | included: /ansible/tasks/_add-device-links.yml for
testbed-node-5 => (item=loop5) 2025-05-28 17:08:38.634441 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-05-28 17:08:38.635062 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-28 17:08:38.635425 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-05-28 17:08:38.635983 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-05-28 17:08:38.636518 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-05-28 17:08:38.637053 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-05-28 17:08:38.637548 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-05-28 17:08:38.638094 | orchestrator | 2025-05-28 17:08:38.638681 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:38.638910 | orchestrator | Wednesday 28 May 2025 17:08:38 +0000 (0:00:00.366) 0:00:27.422 ********* 2025-05-28 17:08:38.825853 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:08:38.827329 | orchestrator | 2025-05-28 17:08:38.831567 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:38.832837 | orchestrator | Wednesday 28 May 2025 17:08:38 +0000 (0:00:00.206) 0:00:27.628 ********* 2025-05-28 17:08:39.046139 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:08:39.047485 | orchestrator | 2025-05-28 17:08:39.048033 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:39.048624 | orchestrator | Wednesday 28 May 2025 17:08:39 +0000 (0:00:00.221) 0:00:27.850 ********* 2025-05-28 17:08:39.251152 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:08:39.254549 | orchestrator | 2025-05-28 17:08:39.257293 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:39.259057 | orchestrator | Wednesday 28 May 2025 17:08:39 +0000 (0:00:00.203) 0:00:28.053 ********* 2025-05-28 17:08:39.471626 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:08:39.472223 | orchestrator | 2025-05-28 17:08:39.472251 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:39.472704 | orchestrator | Wednesday 28 May 2025 17:08:39 +0000 (0:00:00.222) 0:00:28.276 ********* 2025-05-28 17:08:39.673262 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:08:39.675621 | orchestrator | 2025-05-28 17:08:39.675654 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:39.679293 | orchestrator | Wednesday 28 May 2025 17:08:39 +0000 (0:00:00.201) 0:00:28.477 ********* 2025-05-28 17:08:39.865055 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:08:39.865671 | orchestrator | 2025-05-28 17:08:39.866958 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:39.870486 | orchestrator | Wednesday 28 May 2025 17:08:39 +0000 (0:00:00.192) 0:00:28.669 ********* 2025-05-28 17:08:40.048878 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:08:40.050653 | orchestrator | 2025-05-28 17:08:40.050705 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-05-28 17:08:40.051496 | orchestrator | Wednesday 28 May 2025 17:08:40 +0000 (0:00:00.181) 0:00:28.851 ********* 2025-05-28 17:08:40.239121 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:08:40.239585 | orchestrator | 2025-05-28 17:08:40.240487 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:40.241431 | orchestrator | Wednesday 28 May 2025 17:08:40 +0000 (0:00:00.191) 0:00:29.043 ********* 2025-05-28 17:08:40.932708 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f) 2025-05-28 17:08:40.934228 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f) 2025-05-28 17:08:40.937369 | orchestrator | 2025-05-28 17:08:40.938442 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:40.939064 | orchestrator | Wednesday 28 May 2025 17:08:40 +0000 (0:00:00.690) 0:00:29.734 ********* 2025-05-28 17:08:41.829554 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1369a208-db5b-4ff3-8df7-c2f8ed8178e8) 2025-05-28 17:08:41.829954 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1369a208-db5b-4ff3-8df7-c2f8ed8178e8) 2025-05-28 17:08:41.831494 | orchestrator | 2025-05-28 17:08:41.833843 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:41.834565 | orchestrator | Wednesday 28 May 2025 17:08:41 +0000 (0:00:00.897) 0:00:30.632 ********* 2025-05-28 17:08:42.267128 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3045bd6c-b8ff-4958-af32-f9dea72800f3) 2025-05-28 17:08:42.267999 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3045bd6c-b8ff-4958-af32-f9dea72800f3) 2025-05-28 17:08:42.269243 | orchestrator | 2025-05-28 17:08:42.269864 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:42.271459 | orchestrator | Wednesday 28 May 2025 17:08:42 +0000 (0:00:00.438) 0:00:31.070 ********* 2025-05-28 17:08:42.723777 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_80beb2a7-6ee1-4917-8c3d-de783739f119) 2025-05-28 17:08:42.725428 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_80beb2a7-6ee1-4917-8c3d-de783739f119) 2025-05-28 17:08:42.726282 | orchestrator | 2025-05-28 17:08:42.728906 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:08:42.729098 | orchestrator | Wednesday 28 May 2025 17:08:42 +0000 (0:00:00.456) 0:00:31.527 ********* 2025-05-28 17:08:43.062236 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-28 17:08:43.063727 | orchestrator | 2025-05-28 17:08:43.066630 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:43.066863 | orchestrator | Wednesday 28 May 2025 17:08:43 +0000 (0:00:00.335) 0:00:31.863 ********* 2025-05-28 17:08:43.492582 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-28 17:08:43.495780 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-05-28 17:08:43.497212 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-28 17:08:43.498094 | orchestrator 
| included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-28 17:08:43.498816 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-28 17:08:43.500393 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-28 17:08:43.500731 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-28 17:08:43.501414 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-28 17:08:43.501631 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-28 17:08:43.502258 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-28 17:08:43.502754 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-05-28 17:08:43.502880 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-28 17:08:43.503333 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-28 17:08:43.503797 | orchestrator | 2025-05-28 17:08:43.504123 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:43.504541 | orchestrator | Wednesday 28 May 2025 17:08:43 +0000 (0:00:00.433) 0:00:32.296 ********* 2025-05-28 17:08:43.702920 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:08:43.703475 | orchestrator | 2025-05-28 17:08:43.704462 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:43.705146 | orchestrator | Wednesday 28 May 2025 17:08:43 +0000 (0:00:00.205) 0:00:32.502 ********* 2025-05-28 17:08:43.930442 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:08:43.931603 | orchestrator | 2025-05-28 17:08:43.933405 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:43.934657 | orchestrator | Wednesday 28 May 2025 17:08:43 +0000 (0:00:00.232) 0:00:32.734 ********* 2025-05-28 17:08:44.179001 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:08:44.179989 | orchestrator | 2025-05-28 17:08:44.181059 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:44.182069 | orchestrator | Wednesday 28 May 2025 17:08:44 +0000 (0:00:00.247) 0:00:32.982 ********* 2025-05-28 17:08:44.386571 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:08:44.386695 | orchestrator | 2025-05-28 17:08:44.389039 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:44.389708 | orchestrator | Wednesday 28 May 2025 17:08:44 +0000 (0:00:00.203) 0:00:33.186 ********* 2025-05-28 17:08:44.559175 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:08:44.559566 | orchestrator | 2025-05-28 17:08:44.560668 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:44.561946 | orchestrator | Wednesday 28 May 2025 17:08:44 +0000 (0:00:00.176) 0:00:33.362 ********* 2025-05-28 17:08:45.241682 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:08:45.241867 | orchestrator | 2025-05-28 17:08:45.243386 | orchestrator | TASK [Add known partitions to the list of available block devices] 
************* 2025-05-28 17:08:45.244606 | orchestrator | Wednesday 28 May 2025 17:08:45 +0000 (0:00:00.678) 0:00:34.041 ********* 2025-05-28 17:08:45.454286 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:08:45.454451 | orchestrator | 2025-05-28 17:08:45.455329 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:45.456086 | orchestrator | Wednesday 28 May 2025 17:08:45 +0000 (0:00:00.214) 0:00:34.256 ********* 2025-05-28 17:08:45.656074 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:08:45.657111 | orchestrator | 2025-05-28 17:08:45.658261 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:45.659209 | orchestrator | Wednesday 28 May 2025 17:08:45 +0000 (0:00:00.204) 0:00:34.461 ********* 2025-05-28 17:08:46.315299 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-28 17:08:46.315637 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-05-28 17:08:46.315666 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-28 17:08:46.315950 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-28 17:08:46.316599 | orchestrator | 2025-05-28 17:08:46.317140 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:46.317955 | orchestrator | Wednesday 28 May 2025 17:08:46 +0000 (0:00:00.654) 0:00:35.115 ********* 2025-05-28 17:08:46.509956 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:08:46.512950 | orchestrator | 2025-05-28 17:08:46.512990 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:46.514117 | orchestrator | Wednesday 28 May 2025 17:08:46 +0000 (0:00:00.198) 0:00:35.313 ********* 2025-05-28 17:08:46.704358 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:08:46.704890 | orchestrator | 2025-05-28 17:08:46.706167 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:46.706195 | orchestrator | Wednesday 28 May 2025 17:08:46 +0000 (0:00:00.193) 0:00:35.507 ********* 2025-05-28 17:08:46.889568 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:08:46.889710 | orchestrator | 2025-05-28 17:08:46.890389 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:08:46.891043 | orchestrator | Wednesday 28 May 2025 17:08:46 +0000 (0:00:00.185) 0:00:35.692 ********* 2025-05-28 17:08:47.079417 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:08:47.080909 | orchestrator | 2025-05-28 17:08:47.083275 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-28 17:08:47.084503 | orchestrator | Wednesday 28 May 2025 17:08:47 +0000 (0:00:00.190) 0:00:35.883 ********* 2025-05-28 17:08:47.255129 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-05-28 17:08:47.255800 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-05-28 17:08:47.255838 | orchestrator | 2025-05-28 17:08:47.256142 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-28 17:08:47.258649 | orchestrator | Wednesday 28 May 2025 17:08:47 +0000 (0:00:00.177) 0:00:36.060 ********* 2025-05-28 17:08:47.392432 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:08:47.394101 | orchestrator | 2025-05-28 17:08:47.394971 | orchestrator | TASK [Generate DB 
VG names] **************************************************** 2025-05-28 17:08:47.397673 | orchestrator | Wednesday 28 May 2025 17:08:47 +0000 (0:00:00.135) 0:00:36.196 ********* 2025-05-28 17:08:47.516269 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:08:47.518242 | orchestrator | 2025-05-28 17:08:47.521766 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-28 17:08:47.521802 | orchestrator | Wednesday 28 May 2025 17:08:47 +0000 (0:00:00.123) 0:00:36.320 ********* 2025-05-28 17:08:47.658853 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:08:47.664093 | orchestrator | 2025-05-28 17:08:47.665996 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-28 17:08:47.666133 | orchestrator | Wednesday 28 May 2025 17:08:47 +0000 (0:00:00.140) 0:00:36.461 ********* 2025-05-28 17:08:48.009736 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:08:48.010972 | orchestrator | 2025-05-28 17:08:48.011784 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-28 17:08:48.012690 | orchestrator | Wednesday 28 May 2025 17:08:48 +0000 (0:00:00.350) 0:00:36.811 ********* 2025-05-28 17:08:48.192883 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '91f15584-1a8a-582b-a00a-c533bea87f37'}}) 2025-05-28 17:08:48.193007 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd85522ca-9ab4-5810-aefe-18d74b0f7dbe'}}) 2025-05-28 17:08:48.193195 | orchestrator | 2025-05-28 17:08:48.194139 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-28 17:08:48.194283 | orchestrator | Wednesday 28 May 2025 17:08:48 +0000 (0:00:00.186) 0:00:36.998 ********* 2025-05-28 17:08:48.358784 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '91f15584-1a8a-582b-a00a-c533bea87f37'}})  2025-05-28 17:08:48.361872 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd85522ca-9ab4-5810-aefe-18d74b0f7dbe'}})  2025-05-28 17:08:48.361955 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:08:48.362505 | orchestrator | 2025-05-28 17:08:48.365953 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-28 17:08:48.367807 | orchestrator | Wednesday 28 May 2025 17:08:48 +0000 (0:00:00.159) 0:00:37.158 ********* 2025-05-28 17:08:48.515885 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '91f15584-1a8a-582b-a00a-c533bea87f37'}})  2025-05-28 17:08:48.516943 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd85522ca-9ab4-5810-aefe-18d74b0f7dbe'}})  2025-05-28 17:08:48.518238 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:08:48.521978 | orchestrator | 2025-05-28 17:08:48.522079 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-28 17:08:48.522897 | orchestrator | Wednesday 28 May 2025 17:08:48 +0000 (0:00:00.160) 0:00:37.318 ********* 2025-05-28 17:08:48.680617 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '91f15584-1a8a-582b-a00a-c533bea87f37'}})  2025-05-28 17:08:48.682322 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd85522ca-9ab4-5810-aefe-18d74b0f7dbe'}})  2025-05-28 
17:08:48.683964 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:08:48.687981 | orchestrator |
2025-05-28 17:08:48.688483 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-05-28 17:08:48.688848 | orchestrator | Wednesday 28 May 2025 17:08:48 +0000 (0:00:00.161) 0:00:37.480 *********
2025-05-28 17:08:48.833111 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:08:48.833750 | orchestrator |
2025-05-28 17:08:48.835741 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-05-28 17:08:48.837185 | orchestrator | Wednesday 28 May 2025 17:08:48 +0000 (0:00:00.157) 0:00:37.637 *********
2025-05-28 17:08:48.977445 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:08:48.978246 | orchestrator |
2025-05-28 17:08:48.979550 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-05-28 17:08:48.981633 | orchestrator | Wednesday 28 May 2025 17:08:48 +0000 (0:00:00.143) 0:00:37.781 *********
2025-05-28 17:08:49.119170 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:08:49.122526 | orchestrator |
2025-05-28 17:08:49.122832 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-05-28 17:08:49.123513 | orchestrator | Wednesday 28 May 2025 17:08:49 +0000 (0:00:00.140) 0:00:37.921 *********
2025-05-28 17:08:49.263293 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:08:49.263671 | orchestrator |
2025-05-28 17:08:49.264720 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-05-28 17:08:49.265735 | orchestrator | Wednesday 28 May 2025 17:08:49 +0000 (0:00:00.146) 0:00:38.067 *********
2025-05-28 17:08:49.389664 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:08:49.390956 | orchestrator |
2025-05-28 17:08:49.394884 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-28 17:08:49.394906 | orchestrator | Wednesday 28 May 2025 17:08:49 +0000 (0:00:00.125) 0:00:38.193 *********
2025-05-28 17:08:49.535114 | orchestrator | ok: [testbed-node-5] => {
2025-05-28 17:08:49.535725 | orchestrator |  "ceph_osd_devices": {
2025-05-28 17:08:49.535818 | orchestrator |  "sdb": {
2025-05-28 17:08:49.536965 | orchestrator |  "osd_lvm_uuid": "91f15584-1a8a-582b-a00a-c533bea87f37"
2025-05-28 17:08:49.537454 | orchestrator |  },
2025-05-28 17:08:49.537807 | orchestrator |  "sdc": {
2025-05-28 17:08:49.538675 | orchestrator |  "osd_lvm_uuid": "d85522ca-9ab4-5810-aefe-18d74b0f7dbe"
2025-05-28 17:08:49.541075 | orchestrator |  }
2025-05-28 17:08:49.541396 | orchestrator |  }
2025-05-28 17:08:49.541578 | orchestrator | }
2025-05-28 17:08:49.541807 | orchestrator |
2025-05-28 17:08:49.543854 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-28 17:08:49.544127 | orchestrator | Wednesday 28 May 2025 17:08:49 +0000 (0:00:00.145) 0:00:38.338 *********
2025-05-28 17:08:49.664740 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:08:49.665784 | orchestrator |
2025-05-28 17:08:49.669800 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-28 17:08:49.672445 | orchestrator | Wednesday 28 May 2025 17:08:49 +0000 (0:00:00.129) 0:00:38.468 *********
2025-05-28 17:08:50.008613 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:08:50.009466 | orchestrator |
2025-05-28 17:08:50.013899 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-28 17:08:50.014163 | orchestrator | Wednesday 28 May 2025 17:08:50 +0000 (0:00:00.126) 0:00:38.812 *********
2025-05-28 17:08:50.136358 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:08:50.136585 | orchestrator |
2025-05-28 17:08:50.136983 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-28 17:08:50.137688 | orchestrator | Wednesday 28 May 2025 17:08:50 +0000 (0:00:00.126) 0:00:38.939 *********
2025-05-28 17:08:50.350800 | orchestrator | changed: [testbed-node-5] => {
2025-05-28 17:08:50.352015 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-05-28 17:08:50.356460 | orchestrator |  "ceph_osd_devices": {
2025-05-28 17:08:50.357795 | orchestrator |  "sdb": {
2025-05-28 17:08:50.359731 | orchestrator |  "osd_lvm_uuid": "91f15584-1a8a-582b-a00a-c533bea87f37"
2025-05-28 17:08:50.361412 | orchestrator |  },
2025-05-28 17:08:50.363149 | orchestrator |  "sdc": {
2025-05-28 17:08:50.365001 | orchestrator |  "osd_lvm_uuid": "d85522ca-9ab4-5810-aefe-18d74b0f7dbe"
2025-05-28 17:08:50.365768 | orchestrator |  }
2025-05-28 17:08:50.366502 | orchestrator |  },
2025-05-28 17:08:50.367423 | orchestrator |  "lvm_volumes": [
2025-05-28 17:08:50.367945 | orchestrator |  {
2025-05-28 17:08:50.369227 | orchestrator |  "data": "osd-block-91f15584-1a8a-582b-a00a-c533bea87f37",
2025-05-28 17:08:50.371010 | orchestrator |  "data_vg": "ceph-91f15584-1a8a-582b-a00a-c533bea87f37"
2025-05-28 17:08:50.372597 | orchestrator |  },
2025-05-28 17:08:50.374292 | orchestrator |  {
2025-05-28 17:08:50.374652 | orchestrator |  "data": "osd-block-d85522ca-9ab4-5810-aefe-18d74b0f7dbe",
2025-05-28 17:08:50.375493 | orchestrator |  "data_vg": "ceph-d85522ca-9ab4-5810-aefe-18d74b0f7dbe"
2025-05-28 17:08:50.376116 | orchestrator |  }
2025-05-28 17:08:50.377240 | orchestrator |  ]
2025-05-28 17:08:50.377891 | orchestrator |  }
2025-05-28 17:08:50.378467 | orchestrator | }
2025-05-28 17:08:50.379358 | orchestrator |
2025-05-28 17:08:50.380426 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-05-28 17:08:50.381783 | orchestrator | Wednesday 28 May 2025 17:08:50 +0000 (0:00:00.215) 0:00:39.154 *********
2025-05-28 17:08:51.490838 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-05-28 17:08:51.491011 | orchestrator |
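The configuration data printed above makes the derivation visible: each device in ceph_osd_devices carries a version-5 UUID, and every lvm_volumes entry is built from it by prefixing osd-block- for the LV name and ceph- for the VG name. Below is a minimal Python sketch of that expansion; the uuid5 fallback over hostname and device name is a hypothetical stand-in for illustration (the actual inputs the OSISM playbooks use to seed osd_lvm_uuid are not shown in this log):

    import uuid

    # Illustrative only: expand ceph_osd_devices into lvm_volumes entries.
    # The uuid5 namespace/input below is an assumption; the log only
    # confirms the osd-block-<uuid> / ceph-<uuid> naming convention.
    def lvm_volumes_for(hostname: str, ceph_osd_devices: dict) -> list[dict]:
        volumes = []
        for device in sorted(ceph_osd_devices):
            osd_uuid = ceph_osd_devices[device].get("osd_lvm_uuid") or str(
                uuid.uuid5(uuid.NAMESPACE_DNS, f"{hostname}-{device}")
            )
            volumes.append({
                "data": f"osd-block-{osd_uuid}",  # logical volume name
                "data_vg": f"ceph-{osd_uuid}",    # volume group name
            })
        return volumes

    print(lvm_volumes_for(
        "testbed-node-5",
        {"sdb": {"osd_lvm_uuid": "91f15584-1a8a-582b-a00a-c533bea87f37"},
         "sdc": {"osd_lvm_uuid": "d85522ca-9ab4-5810-aefe-18d74b0f7dbe"}},
    ))

Run against the values above, this reproduces the two lvm_volumes entries that the "Write configuration file" handler then persists on testbed-manager.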
2025-05-28 17:08:51.491093 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 17:08:51.491819 | orchestrator | 2025-05-28 17:08:51 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-28 17:08:51.491897 | orchestrator | 2025-05-28 17:08:51 | INFO  | Please wait and do not abort execution.
2025-05-28 17:08:51.492679 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-28 17:08:51.493924 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-28 17:08:51.494962 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-28 17:08:51.495799 | orchestrator |
2025-05-28 17:08:51.496728 | orchestrator |
2025-05-28 17:08:51.497981 | orchestrator |
2025-05-28 17:08:51.499087 | orchestrator | TASKS RECAP ********************************************************************
2025-05-28 17:08:51.499796 | orchestrator | Wednesday 28 May 2025 17:08:51 +0000 (0:00:01.138) 0:00:40.292 *********
2025-05-28 17:08:51.500415 | orchestrator | ===============================================================================
2025-05-28 17:08:51.501270 | orchestrator | Write configuration file ------------------------------------------------ 4.22s
2025-05-28 17:08:51.502091 | orchestrator | Add known partitions to the list of available block devices ------------- 1.16s
2025-05-28 17:08:51.503151 | orchestrator | Add known links to the list of available block devices ------------------ 1.08s
2025-05-28 17:08:51.503844 | orchestrator | Get initial list of available block devices ----------------------------- 1.04s
2025-05-28 17:08:51.504348 | orchestrator | Add known partitions to the list of available block devices ------------- 0.90s
2025-05-28 17:08:51.505301 | orchestrator | Add known links to the list of available block devices ------------------ 0.90s
2025-05-28 17:08:51.506252 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.81s
2025-05-28 17:08:51.506720 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s
2025-05-28 17:08:51.507533 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s
2025-05-28 17:08:51.508144 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.66s
2025-05-28 17:08:51.508787 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s
2025-05-28 17:08:51.509293 | orchestrator | Print DB devices -------------------------------------------------------- 0.63s
2025-05-28 17:08:51.510135 | orchestrator | Print configuration data ------------------------------------------------ 0.61s
2025-05-28 17:08:51.510688 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s
2025-05-28 17:08:51.511073 | orchestrator | Set WAL devices config data --------------------------------------------- 0.61s
2025-05-28 17:08:51.511545 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s
2025-05-28 17:08:51.512017 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.60s
2025-05-28 17:08:51.512563 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.60s
2025-05-28 17:08:51.513092 | orchestrator | Add known partitions to the list of available block devices ------------- 0.56s
2025-05-28 17:08:51.513744 | orchestrator | Add known partitions to the list of available block devices ------------- 0.56s
2025-05-28 17:09:03.593155 | orchestrator | Registering Redlock._acquired_script
2025-05-28 17:09:03.593346 | orchestrator | Registering Redlock._extend_script
2025-05-28 17:09:03.593366 | orchestrator | Registering Redlock._release_script
2025-05-28 17:09:03.636723 | orchestrator | 2025-05-28 17:09:03 | INFO  | Task b0f1034a-f8da-4f22-b832-616cc0abd252 (sync inventory) is running in background. Output coming soon.
2025-05-28 17:09:46.169701 | orchestrator | 2025-05-28 17:09:29 | INFO  | Starting group_vars file reorganization
2025-05-28 17:09:46.169843 | orchestrator | 2025-05-28 17:09:29 | INFO  | Moved 0 file(s) to their respective directories
2025-05-28 17:09:46.169859 | orchestrator | 2025-05-28 17:09:29 | INFO  | Group_vars file reorganization completed
2025-05-28 17:09:46.169872 | orchestrator | 2025-05-28 17:09:31 | INFO  | Starting variable preparation from inventory
2025-05-28 17:09:46.169884 | orchestrator | 2025-05-28 17:09:32 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-05-28 17:09:46.169895 | orchestrator | 2025-05-28 17:09:32 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-05-28 17:09:46.169935 | orchestrator | 2025-05-28 17:09:32 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-05-28 17:09:46.169947 | orchestrator | 2025-05-28 17:09:32 | INFO  | 3 file(s) written, 6 host(s) processed
2025-05-28 17:09:46.169959 | orchestrator | 2025-05-28 17:09:32 | INFO  | Variable preparation completed
2025-05-28 17:09:46.169970 | orchestrator | 2025-05-28 17:09:33 | INFO  | Starting inventory overwrite handling
2025-05-28 17:09:46.169981 | orchestrator | 2025-05-28 17:09:33 | INFO  | Handling group overwrites in 99-overwrite
2025-05-28 17:09:46.169992 | orchestrator | 2025-05-28 17:09:33 | INFO  | Removing group frr:children from 60-generic
2025-05-28 17:09:46.170003 | orchestrator | 2025-05-28 17:09:33 | INFO  | Removing group storage:children from 50-kolla
2025-05-28 17:09:46.170014 | orchestrator | 2025-05-28 17:09:33 | INFO  | Removing group netbird:children from 50-infrastructure
2025-05-28 17:09:46.170083 | orchestrator | 2025-05-28 17:09:33 | INFO  | Removing group ceph-rgw from 50-ceph
2025-05-28 17:09:46.170095 | orchestrator | 2025-05-28 17:09:33 | INFO  | Removing group ceph-mds from 50-ceph
2025-05-28 17:09:46.170106 | orchestrator | 2025-05-28 17:09:33 | INFO  | Handling group overwrites in 20-roles
2025-05-28 17:09:46.170117 | orchestrator | 2025-05-28 17:09:33 | INFO  | Removing group k3s_node from 50-infrastructure
2025-05-28 17:09:46.170128 | orchestrator | 2025-05-28 17:09:33 | INFO  | Removed 6 group(s) in total
2025-05-28 17:09:46.170139 | orchestrator | 2025-05-28 17:09:33 | INFO  | Inventory overwrite handling completed
2025-05-28 17:09:46.170150 | orchestrator | 2025-05-28 17:09:34 | INFO  | Starting merge of inventory files
2025-05-28 17:09:46.170160 | orchestrator | 2025-05-28 17:09:34 | INFO  | Inventory files merged successfully
2025-05-28 17:09:46.170171 | orchestrator | 2025-05-28 17:09:38 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-05-28 17:09:46.170182 | orchestrator | 2025-05-28 17:09:45 | INFO  | Successfully wrote ClusterShell configuration
2025-05-28 17:09:48.182592 | orchestrator | 2025-05-28 17:09:48 | INFO  | Task 0d494738-acb7-4e92-927b-9b5c9dbf280b (ceph-create-lvm-devices) was prepared for execution.
2025-05-28 17:09:48.182742 | orchestrator | 2025-05-28 17:09:48 | INFO  | It takes a moment until task 0d494738-acb7-4e92-927b-9b5c9dbf280b (ceph-create-lvm-devices) has been started and output is visible here.
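The inventory overwrite handling logged above removes a group from lower-priority inventory layers whenever a higher-priority layer (99-overwrite, 20-roles) redefines it, so that exactly one definition survives the merge. A rough sketch of that idea, assuming INI-style layer files; this is an illustration, not the actual osism sync-inventory code:

    import re
    from pathlib import Path

    # Hypothetical illustration: drop every [group] / [group:children]
    # section from lower-priority layers that the overlay redefines.
    def remove_overwritten_groups(overlay: Path, layers: list[Path]) -> int:
        overwritten = set(re.findall(r"^\[([^\]]+)\]", overlay.read_text(), re.M))
        removed = 0
        for layer in layers:
            keep, dropping = [], False
            for line in layer.read_text().splitlines():
                m = re.match(r"^\[([^\]]+)\]$", line)
                if m:
                    dropping = m.group(1) in overwritten
                    if dropping:
                        print(f"Removing group {m.group(1)} from {layer.name}")
                        removed += 1
                if not dropping:
                    keep.append(line)
            layer.write_text("\n".join(keep) + "\n")
        return removed

Applied with 99-overwrite as the overlay and the 50-ceph/50-infrastructure/50-kolla/60-generic layers, this would emit removal messages of the same shape as the six logged above.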
2025-05-28 17:09:52.268736 | orchestrator | 2025-05-28 17:09:52.269555 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-28 17:09:52.272154 | orchestrator | 2025-05-28 17:09:52.273395 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-28 17:09:52.274097 | orchestrator | Wednesday 28 May 2025 17:09:52 +0000 (0:00:00.293) 0:00:00.293 ********* 2025-05-28 17:09:52.486339 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-28 17:09:52.486550 | orchestrator | 2025-05-28 17:09:52.487595 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-28 17:09:52.488692 | orchestrator | Wednesday 28 May 2025 17:09:52 +0000 (0:00:00.219) 0:00:00.512 ********* 2025-05-28 17:09:52.692107 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:09:52.692690 | orchestrator | 2025-05-28 17:09:52.693436 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:09:52.694231 | orchestrator | Wednesday 28 May 2025 17:09:52 +0000 (0:00:00.207) 0:00:00.720 ********* 2025-05-28 17:09:53.066582 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-05-28 17:09:53.067499 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-05-28 17:09:53.068541 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-05-28 17:09:53.071101 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-05-28 17:09:53.071384 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-05-28 17:09:53.072801 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-05-28 17:09:53.073421 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-05-28 17:09:53.074495 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-05-28 17:09:53.075162 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-05-28 17:09:53.075837 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-05-28 17:09:53.076316 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-05-28 17:09:53.077111 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-05-28 17:09:53.077331 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-05-28 17:09:53.077829 | orchestrator | 2025-05-28 17:09:53.078355 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:09:53.078811 | orchestrator | Wednesday 28 May 2025 17:09:53 +0000 (0:00:00.374) 0:00:01.094 ********* 2025-05-28 17:09:53.489982 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:09:53.490295 | orchestrator | 2025-05-28 17:09:53.490692 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:09:53.491436 | orchestrator | Wednesday 28 May 2025 17:09:53 +0000 (0:00:00.421) 0:00:01.516 ********* 2025-05-28 17:09:53.678961 | orchestrator | skipping: [testbed-node-3] 2025-05-28 
17:09:53.679831 | orchestrator | 2025-05-28 17:09:53.680611 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:09:53.681678 | orchestrator | Wednesday 28 May 2025 17:09:53 +0000 (0:00:00.190) 0:00:01.707 ********* 2025-05-28 17:09:53.869917 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:09:53.870600 | orchestrator | 2025-05-28 17:09:53.871661 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:09:53.872790 | orchestrator | Wednesday 28 May 2025 17:09:53 +0000 (0:00:00.190) 0:00:01.897 ********* 2025-05-28 17:09:54.061910 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:09:54.063235 | orchestrator | 2025-05-28 17:09:54.064400 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:09:54.064978 | orchestrator | Wednesday 28 May 2025 17:09:54 +0000 (0:00:00.191) 0:00:02.088 ********* 2025-05-28 17:09:54.258196 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:09:54.258633 | orchestrator | 2025-05-28 17:09:54.260310 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:09:54.261134 | orchestrator | Wednesday 28 May 2025 17:09:54 +0000 (0:00:00.195) 0:00:02.284 ********* 2025-05-28 17:09:54.449948 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:09:54.451271 | orchestrator | 2025-05-28 17:09:54.452044 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:09:54.452652 | orchestrator | Wednesday 28 May 2025 17:09:54 +0000 (0:00:00.190) 0:00:02.475 ********* 2025-05-28 17:09:54.642355 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:09:54.642918 | orchestrator | 2025-05-28 17:09:54.643971 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:09:54.644347 | orchestrator | Wednesday 28 May 2025 17:09:54 +0000 (0:00:00.193) 0:00:02.669 ********* 2025-05-28 17:09:54.839280 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:09:54.840594 | orchestrator | 2025-05-28 17:09:54.841275 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:09:54.842063 | orchestrator | Wednesday 28 May 2025 17:09:54 +0000 (0:00:00.196) 0:00:02.865 ********* 2025-05-28 17:09:55.221588 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5) 2025-05-28 17:09:55.222536 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5) 2025-05-28 17:09:55.223818 | orchestrator | 2025-05-28 17:09:55.224505 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:09:55.225248 | orchestrator | Wednesday 28 May 2025 17:09:55 +0000 (0:00:00.383) 0:00:03.249 ********* 2025-05-28 17:09:55.616602 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_da6420c4-4562-42e6-8445-8de06d590092) 2025-05-28 17:09:55.617664 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_da6420c4-4562-42e6-8445-8de06d590092) 2025-05-28 17:09:55.619445 | orchestrator | 2025-05-28 17:09:55.620804 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:09:55.621666 | orchestrator | Wednesday 28 May 2025 17:09:55 +0000 (0:00:00.393) 0:00:03.642 ********* 2025-05-28 
17:09:56.248665 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_66780fe2-f30a-4cd5-a925-045679329f08) 2025-05-28 17:09:56.248906 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_66780fe2-f30a-4cd5-a925-045679329f08) 2025-05-28 17:09:56.249527 | orchestrator | 2025-05-28 17:09:56.251485 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:09:56.253053 | orchestrator | Wednesday 28 May 2025 17:09:56 +0000 (0:00:00.632) 0:00:04.274 ********* 2025-05-28 17:09:57.045917 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_705788e5-cc1d-4d40-94fd-fb0e2f22a483) 2025-05-28 17:09:57.046759 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_705788e5-cc1d-4d40-94fd-fb0e2f22a483) 2025-05-28 17:09:57.047831 | orchestrator | 2025-05-28 17:09:57.048680 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:09:57.049539 | orchestrator | Wednesday 28 May 2025 17:09:57 +0000 (0:00:00.797) 0:00:05.072 ********* 2025-05-28 17:09:57.351387 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-28 17:09:57.353972 | orchestrator | 2025-05-28 17:09:57.354356 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:09:57.354655 | orchestrator | Wednesday 28 May 2025 17:09:57 +0000 (0:00:00.304) 0:00:05.376 ********* 2025-05-28 17:09:57.750662 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-05-28 17:09:57.753396 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-05-28 17:09:57.753421 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-05-28 17:09:57.753951 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-05-28 17:09:57.755265 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-05-28 17:09:57.755714 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-05-28 17:09:57.757590 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-05-28 17:09:57.758732 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-05-28 17:09:57.760562 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-05-28 17:09:57.761365 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-05-28 17:09:57.762273 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-05-28 17:09:57.763132 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-05-28 17:09:57.763746 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-05-28 17:09:57.764024 | orchestrator | 2025-05-28 17:09:57.764492 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:09:57.764906 | orchestrator | Wednesday 28 May 2025 17:09:57 +0000 (0:00:00.399) 0:00:05.776 ********* 2025-05-28 17:09:57.938074 | orchestrator | skipping: [testbed-node-3] 
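The repeated "Add known links" tasks above resolve each kernel device (sda, sdb, ...) to its stable /dev/disk/by-id symlinks, which is why every disk appears twice: scsi-0QEMU_... and scsi-SQEMU_... are two serial-number encodings of the same QEMU disk. A small Python sketch of the same lookup, runnable on any Linux host:

    import os
    from pathlib import Path

    # Map each kernel block device to its stable by-id symlink names,
    # e.g. 'sdb' -> ['scsi-0QEMU_QEMU_HARDDISK_...', 'scsi-SQEMU_...'].
    def device_links() -> dict[str, list[str]]:
        links: dict[str, list[str]] = {}
        for link in sorted(Path("/dev/disk/by-id").iterdir()):
            target = os.path.basename(os.path.realpath(link))
            links.setdefault(target, []).append(link.name)
        return links

    for dev, names in sorted(device_links().items()):
        print(dev, "->", names)

Using the by-id names rather than sdb/sdc keeps the OSD-to-disk mapping stable across reboots, where kernel device ordering can change.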
2025-05-28 17:09:57.938545 | orchestrator | 2025-05-28 17:09:57.939316 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:09:57.940207 | orchestrator | Wednesday 28 May 2025 17:09:57 +0000 (0:00:00.188) 0:00:05.964 ********* 2025-05-28 17:09:58.135324 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:09:58.135668 | orchestrator | 2025-05-28 17:09:58.137141 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:09:58.137684 | orchestrator | Wednesday 28 May 2025 17:09:58 +0000 (0:00:00.199) 0:00:06.163 ********* 2025-05-28 17:09:58.323417 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:09:58.323902 | orchestrator | 2025-05-28 17:09:58.325332 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:09:58.326217 | orchestrator | Wednesday 28 May 2025 17:09:58 +0000 (0:00:00.187) 0:00:06.350 ********* 2025-05-28 17:09:58.514384 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:09:58.514702 | orchestrator | 2025-05-28 17:09:58.515763 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:09:58.516669 | orchestrator | Wednesday 28 May 2025 17:09:58 +0000 (0:00:00.189) 0:00:06.540 ********* 2025-05-28 17:09:58.706379 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:09:58.706830 | orchestrator | 2025-05-28 17:09:58.707304 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:09:58.708040 | orchestrator | Wednesday 28 May 2025 17:09:58 +0000 (0:00:00.194) 0:00:06.734 ********* 2025-05-28 17:09:58.897313 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:09:58.897918 | orchestrator | 2025-05-28 17:09:58.898667 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:09:58.899333 | orchestrator | Wednesday 28 May 2025 17:09:58 +0000 (0:00:00.190) 0:00:06.924 ********* 2025-05-28 17:09:59.090266 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:09:59.090686 | orchestrator | 2025-05-28 17:09:59.092232 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:09:59.092673 | orchestrator | Wednesday 28 May 2025 17:09:59 +0000 (0:00:00.192) 0:00:07.117 ********* 2025-05-28 17:09:59.281469 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:09:59.281598 | orchestrator | 2025-05-28 17:09:59.282309 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:09:59.282934 | orchestrator | Wednesday 28 May 2025 17:09:59 +0000 (0:00:00.191) 0:00:07.308 ********* 2025-05-28 17:10:00.270397 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-05-28 17:10:00.270593 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-05-28 17:10:00.271374 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-05-28 17:10:00.272271 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-05-28 17:10:00.272876 | orchestrator | 2025-05-28 17:10:00.273706 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:10:00.274501 | orchestrator | Wednesday 28 May 2025 17:10:00 +0000 (0:00:00.986) 0:00:08.295 ********* 2025-05-28 17:10:00.457697 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:00.457824 | orchestrator | 2025-05-28 17:10:00.458229 | orchestrator | 
TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:10:00.458801 | orchestrator | Wednesday 28 May 2025 17:10:00 +0000 (0:00:00.189) 0:00:08.484 ********* 2025-05-28 17:10:00.648152 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:00.648413 | orchestrator | 2025-05-28 17:10:00.649005 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:10:00.649952 | orchestrator | Wednesday 28 May 2025 17:10:00 +0000 (0:00:00.189) 0:00:08.674 ********* 2025-05-28 17:10:00.845671 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:00.845925 | orchestrator | 2025-05-28 17:10:00.847070 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:10:00.848100 | orchestrator | Wednesday 28 May 2025 17:10:00 +0000 (0:00:00.199) 0:00:08.873 ********* 2025-05-28 17:10:01.039627 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:01.039763 | orchestrator | 2025-05-28 17:10:01.039787 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-28 17:10:01.040405 | orchestrator | Wednesday 28 May 2025 17:10:01 +0000 (0:00:00.193) 0:00:09.066 ********* 2025-05-28 17:10:01.175509 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:01.176424 | orchestrator | 2025-05-28 17:10:01.177542 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-28 17:10:01.178541 | orchestrator | Wednesday 28 May 2025 17:10:01 +0000 (0:00:00.136) 0:00:09.202 ********* 2025-05-28 17:10:01.350280 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b27f73ed-a290-5ab5-82ba-70ebe910dd97'}}) 2025-05-28 17:10:01.350779 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fbdc558b-af0f-50ef-b610-4a3c4fb87cac'}}) 2025-05-28 17:10:01.352112 | orchestrator | 2025-05-28 17:10:01.353071 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-28 17:10:01.358000 | orchestrator | Wednesday 28 May 2025 17:10:01 +0000 (0:00:00.174) 0:00:09.377 ********* 2025-05-28 17:10:03.420195 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b27f73ed-a290-5ab5-82ba-70ebe910dd97', 'data_vg': 'ceph-b27f73ed-a290-5ab5-82ba-70ebe910dd97'}) 2025-05-28 17:10:03.421226 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-fbdc558b-af0f-50ef-b610-4a3c4fb87cac', 'data_vg': 'ceph-fbdc558b-af0f-50ef-b610-4a3c4fb87cac'}) 2025-05-28 17:10:03.422896 | orchestrator | 2025-05-28 17:10:03.425117 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-28 17:10:03.425272 | orchestrator | Wednesday 28 May 2025 17:10:03 +0000 (0:00:02.069) 0:00:11.446 ********* 2025-05-28 17:10:03.574574 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b27f73ed-a290-5ab5-82ba-70ebe910dd97', 'data_vg': 'ceph-b27f73ed-a290-5ab5-82ba-70ebe910dd97'})  2025-05-28 17:10:03.574732 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fbdc558b-af0f-50ef-b610-4a3c4fb87cac', 'data_vg': 'ceph-fbdc558b-af0f-50ef-b610-4a3c4fb87cac'})  2025-05-28 17:10:03.575546 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:03.576086 | orchestrator | 2025-05-28 17:10:03.577098 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-28 
17:10:03.577654 | orchestrator | Wednesday 28 May 2025 17:10:03 +0000 (0:00:00.155) 0:00:11.601 ********* 2025-05-28 17:10:05.001116 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b27f73ed-a290-5ab5-82ba-70ebe910dd97', 'data_vg': 'ceph-b27f73ed-a290-5ab5-82ba-70ebe910dd97'}) 2025-05-28 17:10:05.001835 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-fbdc558b-af0f-50ef-b610-4a3c4fb87cac', 'data_vg': 'ceph-fbdc558b-af0f-50ef-b610-4a3c4fb87cac'}) 2025-05-28 17:10:05.002626 | orchestrator | 2025-05-28 17:10:05.003846 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-28 17:10:05.003874 | orchestrator | Wednesday 28 May 2025 17:10:04 +0000 (0:00:01.425) 0:00:13.027 ********* 2025-05-28 17:10:05.147246 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b27f73ed-a290-5ab5-82ba-70ebe910dd97', 'data_vg': 'ceph-b27f73ed-a290-5ab5-82ba-70ebe910dd97'})  2025-05-28 17:10:05.147346 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fbdc558b-af0f-50ef-b610-4a3c4fb87cac', 'data_vg': 'ceph-fbdc558b-af0f-50ef-b610-4a3c4fb87cac'})  2025-05-28 17:10:05.147359 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:05.147371 | orchestrator | 2025-05-28 17:10:05.147436 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-28 17:10:05.147901 | orchestrator | Wednesday 28 May 2025 17:10:05 +0000 (0:00:00.145) 0:00:13.172 ********* 2025-05-28 17:10:05.280550 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:05.281492 | orchestrator | 2025-05-28 17:10:05.282533 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-28 17:10:05.283512 | orchestrator | Wednesday 28 May 2025 17:10:05 +0000 (0:00:00.135) 0:00:13.307 ********* 2025-05-28 17:10:05.612869 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b27f73ed-a290-5ab5-82ba-70ebe910dd97', 'data_vg': 'ceph-b27f73ed-a290-5ab5-82ba-70ebe910dd97'})  2025-05-28 17:10:05.614491 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fbdc558b-af0f-50ef-b610-4a3c4fb87cac', 'data_vg': 'ceph-fbdc558b-af0f-50ef-b610-4a3c4fb87cac'})  2025-05-28 17:10:05.616049 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:05.616730 | orchestrator | 2025-05-28 17:10:05.617521 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-28 17:10:05.618252 | orchestrator | Wednesday 28 May 2025 17:10:05 +0000 (0:00:00.330) 0:00:13.638 ********* 2025-05-28 17:10:05.747748 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:05.749319 | orchestrator | 2025-05-28 17:10:05.751361 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-28 17:10:05.752372 | orchestrator | Wednesday 28 May 2025 17:10:05 +0000 (0:00:00.136) 0:00:13.774 ********* 2025-05-28 17:10:05.892240 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b27f73ed-a290-5ab5-82ba-70ebe910dd97', 'data_vg': 'ceph-b27f73ed-a290-5ab5-82ba-70ebe910dd97'})  2025-05-28 17:10:05.893035 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fbdc558b-af0f-50ef-b610-4a3c4fb87cac', 'data_vg': 'ceph-fbdc558b-af0f-50ef-b610-4a3c4fb87cac'})  2025-05-28 17:10:05.894347 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:05.896356 | orchestrator | 2025-05-28 17:10:05.896415 | orchestrator | 
TASK [Create DB+WAL VGs] ******************************************************* 2025-05-28 17:10:05.896772 | orchestrator | Wednesday 28 May 2025 17:10:05 +0000 (0:00:00.144) 0:00:13.919 ********* 2025-05-28 17:10:06.026688 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:06.026858 | orchestrator | 2025-05-28 17:10:06.028546 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-28 17:10:06.029632 | orchestrator | Wednesday 28 May 2025 17:10:06 +0000 (0:00:00.133) 0:00:14.053 ********* 2025-05-28 17:10:06.171745 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b27f73ed-a290-5ab5-82ba-70ebe910dd97', 'data_vg': 'ceph-b27f73ed-a290-5ab5-82ba-70ebe910dd97'})  2025-05-28 17:10:06.171930 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fbdc558b-af0f-50ef-b610-4a3c4fb87cac', 'data_vg': 'ceph-fbdc558b-af0f-50ef-b610-4a3c4fb87cac'})  2025-05-28 17:10:06.173417 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:06.174258 | orchestrator | 2025-05-28 17:10:06.174856 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-28 17:10:06.175761 | orchestrator | Wednesday 28 May 2025 17:10:06 +0000 (0:00:00.144) 0:00:14.198 ********* 2025-05-28 17:10:06.295815 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:10:06.296368 | orchestrator | 2025-05-28 17:10:06.297450 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-28 17:10:06.298141 | orchestrator | Wednesday 28 May 2025 17:10:06 +0000 (0:00:00.125) 0:00:14.323 ********* 2025-05-28 17:10:06.433327 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b27f73ed-a290-5ab5-82ba-70ebe910dd97', 'data_vg': 'ceph-b27f73ed-a290-5ab5-82ba-70ebe910dd97'})  2025-05-28 17:10:06.434283 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fbdc558b-af0f-50ef-b610-4a3c4fb87cac', 'data_vg': 'ceph-fbdc558b-af0f-50ef-b610-4a3c4fb87cac'})  2025-05-28 17:10:06.435649 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:06.437466 | orchestrator | 2025-05-28 17:10:06.437495 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-28 17:10:06.438765 | orchestrator | Wednesday 28 May 2025 17:10:06 +0000 (0:00:00.137) 0:00:14.461 ********* 2025-05-28 17:10:06.579195 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b27f73ed-a290-5ab5-82ba-70ebe910dd97', 'data_vg': 'ceph-b27f73ed-a290-5ab5-82ba-70ebe910dd97'})  2025-05-28 17:10:06.579535 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fbdc558b-af0f-50ef-b610-4a3c4fb87cac', 'data_vg': 'ceph-fbdc558b-af0f-50ef-b610-4a3c4fb87cac'})  2025-05-28 17:10:06.581302 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:06.581487 | orchestrator | 2025-05-28 17:10:06.582360 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-28 17:10:06.583275 | orchestrator | Wednesday 28 May 2025 17:10:06 +0000 (0:00:00.143) 0:00:14.604 ********* 2025-05-28 17:10:06.721283 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b27f73ed-a290-5ab5-82ba-70ebe910dd97', 'data_vg': 'ceph-b27f73ed-a290-5ab5-82ba-70ebe910dd97'})  2025-05-28 17:10:06.721466 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fbdc558b-af0f-50ef-b610-4a3c4fb87cac', 'data_vg': 'ceph-fbdc558b-af0f-50ef-b610-4a3c4fb87cac'})  
2025-05-28 17:10:06.722310 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:06.722854 | orchestrator | 2025-05-28 17:10:06.723584 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-28 17:10:06.724041 | orchestrator | Wednesday 28 May 2025 17:10:06 +0000 (0:00:00.144) 0:00:14.749 ********* 2025-05-28 17:10:06.854762 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:06.856277 | orchestrator | 2025-05-28 17:10:06.856486 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-28 17:10:06.859109 | orchestrator | Wednesday 28 May 2025 17:10:06 +0000 (0:00:00.132) 0:00:14.882 ********* 2025-05-28 17:10:06.997721 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:06.997944 | orchestrator | 2025-05-28 17:10:06.999345 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-28 17:10:07.000741 | orchestrator | Wednesday 28 May 2025 17:10:06 +0000 (0:00:00.142) 0:00:15.024 ********* 2025-05-28 17:10:07.126187 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:07.126365 | orchestrator | 2025-05-28 17:10:07.126809 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-28 17:10:07.126991 | orchestrator | Wednesday 28 May 2025 17:10:07 +0000 (0:00:00.130) 0:00:15.154 ********* 2025-05-28 17:10:07.437185 | orchestrator | ok: [testbed-node-3] => { 2025-05-28 17:10:07.438272 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-28 17:10:07.438962 | orchestrator | } 2025-05-28 17:10:07.440552 | orchestrator | 2025-05-28 17:10:07.441385 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-28 17:10:07.442424 | orchestrator | Wednesday 28 May 2025 17:10:07 +0000 (0:00:00.310) 0:00:15.465 ********* 2025-05-28 17:10:07.571946 | orchestrator | ok: [testbed-node-3] => { 2025-05-28 17:10:07.572274 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-28 17:10:07.573985 | orchestrator | } 2025-05-28 17:10:07.574683 | orchestrator | 2025-05-28 17:10:07.575606 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-28 17:10:07.576971 | orchestrator | Wednesday 28 May 2025 17:10:07 +0000 (0:00:00.133) 0:00:15.598 ********* 2025-05-28 17:10:07.707307 | orchestrator | ok: [testbed-node-3] => { 2025-05-28 17:10:07.707794 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-28 17:10:07.708468 | orchestrator | } 2025-05-28 17:10:07.709335 | orchestrator | 2025-05-28 17:10:07.710845 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-28 17:10:07.711994 | orchestrator | Wednesday 28 May 2025 17:10:07 +0000 (0:00:00.135) 0:00:15.734 ********* 2025-05-28 17:10:08.326253 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:10:08.328366 | orchestrator | 2025-05-28 17:10:08.328943 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-28 17:10:08.330701 | orchestrator | Wednesday 28 May 2025 17:10:08 +0000 (0:00:00.618) 0:00:16.353 ********* 2025-05-28 17:10:08.863473 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:10:08.863586 | orchestrator | 2025-05-28 17:10:08.863946 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-28 17:10:08.864636 | orchestrator | Wednesday 28 May 2025 17:10:08 +0000 (0:00:00.535) 
0:00:16.889 ********* 2025-05-28 17:10:09.377873 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:10:09.380297 | orchestrator | 2025-05-28 17:10:09.381233 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-28 17:10:09.382323 | orchestrator | Wednesday 28 May 2025 17:10:09 +0000 (0:00:00.512) 0:00:17.401 ********* 2025-05-28 17:10:09.515527 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:10:09.515658 | orchestrator | 2025-05-28 17:10:09.515896 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-28 17:10:09.518007 | orchestrator | Wednesday 28 May 2025 17:10:09 +0000 (0:00:00.141) 0:00:17.543 ********* 2025-05-28 17:10:09.627085 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:09.628201 | orchestrator | 2025-05-28 17:10:09.629414 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-28 17:10:09.630101 | orchestrator | Wednesday 28 May 2025 17:10:09 +0000 (0:00:00.109) 0:00:17.653 ********* 2025-05-28 17:10:09.731440 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:09.731972 | orchestrator | 2025-05-28 17:10:09.732677 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-28 17:10:09.733444 | orchestrator | Wednesday 28 May 2025 17:10:09 +0000 (0:00:00.105) 0:00:17.758 ********* 2025-05-28 17:10:09.871126 | orchestrator | ok: [testbed-node-3] => { 2025-05-28 17:10:09.872050 | orchestrator |  "vgs_report": { 2025-05-28 17:10:09.873217 | orchestrator |  "vg": [] 2025-05-28 17:10:09.873544 | orchestrator |  } 2025-05-28 17:10:09.874458 | orchestrator | } 2025-05-28 17:10:09.875241 | orchestrator | 2025-05-28 17:10:09.875730 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-28 17:10:09.875861 | orchestrator | Wednesday 28 May 2025 17:10:09 +0000 (0:00:00.140) 0:00:17.899 ********* 2025-05-28 17:10:10.005277 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:10.005770 | orchestrator | 2025-05-28 17:10:10.006503 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-28 17:10:10.007246 | orchestrator | Wednesday 28 May 2025 17:10:09 +0000 (0:00:00.134) 0:00:18.033 ********* 2025-05-28 17:10:10.135714 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:10.135819 | orchestrator | 2025-05-28 17:10:10.136446 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-28 17:10:10.136884 | orchestrator | Wednesday 28 May 2025 17:10:10 +0000 (0:00:00.126) 0:00:18.160 ********* 2025-05-28 17:10:10.427471 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:10.428082 | orchestrator | 2025-05-28 17:10:10.428569 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-28 17:10:10.429492 | orchestrator | Wednesday 28 May 2025 17:10:10 +0000 (0:00:00.294) 0:00:18.454 ********* 2025-05-28 17:10:10.558230 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:10.558413 | orchestrator | 2025-05-28 17:10:10.560171 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-28 17:10:10.560513 | orchestrator | Wednesday 28 May 2025 17:10:10 +0000 (0:00:00.129) 0:00:18.584 ********* 2025-05-28 17:10:10.687907 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:10.688657 | orchestrator | 
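The three "Gather ... VGs with total and available size in bytes" tasks above collect LVM state as JSON, and "Combine JSON from _db/wal/db_wal_vgs_cmd_output" merges the three reports; since this job defines no dedicated DB/WAL devices, the combined vgs_report comes back empty ("vg": []) and every subsequent size calculation and check is skipped. A minimal sketch of one such gather step, assuming a plain vgs call (the playbook's literal task is not shown in this log; the register names mirror the variables named in the combine step):

    # Hedged sketch, not the OSISM implementation. The real task presumably
    # restricts the query to the configured DB/WAL VGs, which is why the
    # combined report stays empty when none are configured.
    - name: Gather DB VGs with total and available size in bytes
      ansible.builtin.command: >
        vgs --units b --nosuffix --reportformat json
        -o vg_name,vg_size,vg_free
      register: _db_vgs_cmd_output
      changed_when: false
    # repeated analogously for _wal_vgs_cmd_output and _db_wal_vgs_cmd_output

    - name: Combine JSON from _db/wal/db_wal_vgs_cmd_output
      ansible.builtin.set_fact:
        vgs_report:
          vg: "{{ (_db_vgs_cmd_output.stdout | from_json).report.0.vg
                + (_wal_vgs_cmd_output.stdout | from_json).report.0.vg
                + (_db_wal_vgs_cmd_output.stdout | from_json).report.0.vg }}"

The later "Get list of Ceph LVs/PVs with associated VGs" tasks follow the same pattern with lvs/pvs (e.g. lvs -o lv_name,vg_name --reportformat json), which is where the lvm_report printed further below comes from.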
2025-05-28 17:10:10.689433 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-28 17:10:10.690291 | orchestrator | Wednesday 28 May 2025 17:10:10 +0000 (0:00:00.131) 0:00:18.715 ********* 2025-05-28 17:10:10.815687 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:10.815837 | orchestrator | 2025-05-28 17:10:10.816398 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-28 17:10:10.816861 | orchestrator | Wednesday 28 May 2025 17:10:10 +0000 (0:00:00.127) 0:00:18.843 ********* 2025-05-28 17:10:10.943843 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:10.944068 | orchestrator | 2025-05-28 17:10:10.944981 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-28 17:10:10.945810 | orchestrator | Wednesday 28 May 2025 17:10:10 +0000 (0:00:00.128) 0:00:18.971 ********* 2025-05-28 17:10:11.075241 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:11.075653 | orchestrator | 2025-05-28 17:10:11.076958 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-28 17:10:11.077771 | orchestrator | Wednesday 28 May 2025 17:10:11 +0000 (0:00:00.130) 0:00:19.102 ********* 2025-05-28 17:10:11.203823 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:11.203996 | orchestrator | 2025-05-28 17:10:11.204016 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-28 17:10:11.204874 | orchestrator | Wednesday 28 May 2025 17:10:11 +0000 (0:00:00.128) 0:00:19.230 ********* 2025-05-28 17:10:11.331997 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:11.333105 | orchestrator | 2025-05-28 17:10:11.334073 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-28 17:10:11.335119 | orchestrator | Wednesday 28 May 2025 17:10:11 +0000 (0:00:00.128) 0:00:19.359 ********* 2025-05-28 17:10:11.459125 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:11.459683 | orchestrator | 2025-05-28 17:10:11.461255 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-28 17:10:11.462290 | orchestrator | Wednesday 28 May 2025 17:10:11 +0000 (0:00:00.127) 0:00:19.487 ********* 2025-05-28 17:10:11.591031 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:11.591634 | orchestrator | 2025-05-28 17:10:11.596173 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-28 17:10:11.596554 | orchestrator | Wednesday 28 May 2025 17:10:11 +0000 (0:00:00.131) 0:00:19.618 ********* 2025-05-28 17:10:11.720758 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:11.720854 | orchestrator | 2025-05-28 17:10:11.722009 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-28 17:10:11.723102 | orchestrator | Wednesday 28 May 2025 17:10:11 +0000 (0:00:00.129) 0:00:19.748 ********* 2025-05-28 17:10:11.850352 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:11.850826 | orchestrator | 2025-05-28 17:10:11.851774 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-28 17:10:11.853201 | orchestrator | Wednesday 28 May 2025 17:10:11 +0000 (0:00:00.129) 0:00:19.877 ********* 2025-05-28 17:10:12.170392 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-b27f73ed-a290-5ab5-82ba-70ebe910dd97', 'data_vg': 'ceph-b27f73ed-a290-5ab5-82ba-70ebe910dd97'})  2025-05-28 17:10:12.170817 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fbdc558b-af0f-50ef-b610-4a3c4fb87cac', 'data_vg': 'ceph-fbdc558b-af0f-50ef-b610-4a3c4fb87cac'})  2025-05-28 17:10:12.172493 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:12.172966 | orchestrator | 2025-05-28 17:10:12.174394 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-28 17:10:12.175093 | orchestrator | Wednesday 28 May 2025 17:10:12 +0000 (0:00:00.318) 0:00:20.197 ********* 2025-05-28 17:10:12.306389 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b27f73ed-a290-5ab5-82ba-70ebe910dd97', 'data_vg': 'ceph-b27f73ed-a290-5ab5-82ba-70ebe910dd97'})  2025-05-28 17:10:12.306487 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fbdc558b-af0f-50ef-b610-4a3c4fb87cac', 'data_vg': 'ceph-fbdc558b-af0f-50ef-b610-4a3c4fb87cac'})  2025-05-28 17:10:12.306941 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:12.309464 | orchestrator | 2025-05-28 17:10:12.310520 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-28 17:10:12.311258 | orchestrator | Wednesday 28 May 2025 17:10:12 +0000 (0:00:00.135) 0:00:20.332 ********* 2025-05-28 17:10:12.439902 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b27f73ed-a290-5ab5-82ba-70ebe910dd97', 'data_vg': 'ceph-b27f73ed-a290-5ab5-82ba-70ebe910dd97'})  2025-05-28 17:10:12.440293 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fbdc558b-af0f-50ef-b610-4a3c4fb87cac', 'data_vg': 'ceph-fbdc558b-af0f-50ef-b610-4a3c4fb87cac'})  2025-05-28 17:10:12.441230 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:12.441785 | orchestrator | 2025-05-28 17:10:12.443557 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-28 17:10:12.443599 | orchestrator | Wednesday 28 May 2025 17:10:12 +0000 (0:00:00.135) 0:00:20.467 ********* 2025-05-28 17:10:12.578753 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b27f73ed-a290-5ab5-82ba-70ebe910dd97', 'data_vg': 'ceph-b27f73ed-a290-5ab5-82ba-70ebe910dd97'})  2025-05-28 17:10:12.578963 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fbdc558b-af0f-50ef-b610-4a3c4fb87cac', 'data_vg': 'ceph-fbdc558b-af0f-50ef-b610-4a3c4fb87cac'})  2025-05-28 17:10:12.580009 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:12.580939 | orchestrator | 2025-05-28 17:10:12.582007 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-28 17:10:12.582770 | orchestrator | Wednesday 28 May 2025 17:10:12 +0000 (0:00:00.138) 0:00:20.606 ********* 2025-05-28 17:10:12.725390 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b27f73ed-a290-5ab5-82ba-70ebe910dd97', 'data_vg': 'ceph-b27f73ed-a290-5ab5-82ba-70ebe910dd97'})  2025-05-28 17:10:12.726478 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fbdc558b-af0f-50ef-b610-4a3c4fb87cac', 'data_vg': 'ceph-fbdc558b-af0f-50ef-b610-4a3c4fb87cac'})  2025-05-28 17:10:12.727124 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:12.728053 | orchestrator | 2025-05-28 17:10:12.728753 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 
2025-05-28 17:10:12.729616 | orchestrator | Wednesday 28 May 2025 17:10:12 +0000 (0:00:00.146) 0:00:20.752 ********* 2025-05-28 17:10:12.862619 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b27f73ed-a290-5ab5-82ba-70ebe910dd97', 'data_vg': 'ceph-b27f73ed-a290-5ab5-82ba-70ebe910dd97'})  2025-05-28 17:10:12.862837 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fbdc558b-af0f-50ef-b610-4a3c4fb87cac', 'data_vg': 'ceph-fbdc558b-af0f-50ef-b610-4a3c4fb87cac'})  2025-05-28 17:10:12.863696 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:12.864480 | orchestrator | 2025-05-28 17:10:12.865242 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-28 17:10:12.865617 | orchestrator | Wednesday 28 May 2025 17:10:12 +0000 (0:00:00.138) 0:00:20.891 ********* 2025-05-28 17:10:13.016954 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b27f73ed-a290-5ab5-82ba-70ebe910dd97', 'data_vg': 'ceph-b27f73ed-a290-5ab5-82ba-70ebe910dd97'})  2025-05-28 17:10:13.017050 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fbdc558b-af0f-50ef-b610-4a3c4fb87cac', 'data_vg': 'ceph-fbdc558b-af0f-50ef-b610-4a3c4fb87cac'})  2025-05-28 17:10:13.018623 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:13.020717 | orchestrator | 2025-05-28 17:10:13.022200 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-28 17:10:13.022781 | orchestrator | Wednesday 28 May 2025 17:10:13 +0000 (0:00:00.152) 0:00:21.043 ********* 2025-05-28 17:10:13.181088 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b27f73ed-a290-5ab5-82ba-70ebe910dd97', 'data_vg': 'ceph-b27f73ed-a290-5ab5-82ba-70ebe910dd97'})  2025-05-28 17:10:13.181199 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fbdc558b-af0f-50ef-b610-4a3c4fb87cac', 'data_vg': 'ceph-fbdc558b-af0f-50ef-b610-4a3c4fb87cac'})  2025-05-28 17:10:13.181571 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:10:13.183712 | orchestrator | 2025-05-28 17:10:13.184645 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-28 17:10:13.184939 | orchestrator | Wednesday 28 May 2025 17:10:13 +0000 (0:00:00.166) 0:00:21.209 ********* 2025-05-28 17:10:13.681356 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:10:13.682355 | orchestrator | 2025-05-28 17:10:13.682894 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-28 17:10:13.684468 | orchestrator | Wednesday 28 May 2025 17:10:13 +0000 (0:00:00.498) 0:00:21.707 ********* 2025-05-28 17:10:14.173529 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:10:14.173722 | orchestrator | 2025-05-28 17:10:14.174110 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-28 17:10:14.175097 | orchestrator | Wednesday 28 May 2025 17:10:14 +0000 (0:00:00.493) 0:00:22.201 ********* 2025-05-28 17:10:14.326418 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:10:14.328499 | orchestrator | 2025-05-28 17:10:14.328719 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-28 17:10:14.329655 | orchestrator | Wednesday 28 May 2025 17:10:14 +0000 (0:00:00.152) 0:00:22.353 ********* 2025-05-28 17:10:14.482701 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 
'osd-block-b27f73ed-a290-5ab5-82ba-70ebe910dd97', 'vg_name': 'ceph-b27f73ed-a290-5ab5-82ba-70ebe910dd97'})
2025-05-28 17:10:14.483475 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-fbdc558b-af0f-50ef-b610-4a3c4fb87cac', 'vg_name': 'ceph-fbdc558b-af0f-50ef-b610-4a3c4fb87cac'})
2025-05-28 17:10:14.483762 | orchestrator |
2025-05-28 17:10:14.484686 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-05-28 17:10:14.485336 | orchestrator | Wednesday 28 May 2025 17:10:14 +0000 (0:00:00.156) 0:00:22.510 *********
2025-05-28 17:10:14.809739 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b27f73ed-a290-5ab5-82ba-70ebe910dd97', 'data_vg': 'ceph-b27f73ed-a290-5ab5-82ba-70ebe910dd97'})
2025-05-28 17:10:14.809884 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fbdc558b-af0f-50ef-b610-4a3c4fb87cac', 'data_vg': 'ceph-fbdc558b-af0f-50ef-b610-4a3c4fb87cac'})
2025-05-28 17:10:14.811552 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:10:14.812872 | orchestrator |
2025-05-28 17:10:14.814453 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-05-28 17:10:14.815752 | orchestrator | Wednesday 28 May 2025 17:10:14 +0000 (0:00:00.322) 0:00:22.832 *********
2025-05-28 17:10:14.961912 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b27f73ed-a290-5ab5-82ba-70ebe910dd97', 'data_vg': 'ceph-b27f73ed-a290-5ab5-82ba-70ebe910dd97'})
2025-05-28 17:10:14.962410 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fbdc558b-af0f-50ef-b610-4a3c4fb87cac', 'data_vg': 'ceph-fbdc558b-af0f-50ef-b610-4a3c4fb87cac'})
2025-05-28 17:10:14.963633 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:10:14.964905 | orchestrator |
2025-05-28 17:10:14.965679 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-05-28 17:10:14.966218 | orchestrator | Wednesday 28 May 2025 17:10:14 +0000 (0:00:00.154) 0:00:22.987 *********
2025-05-28 17:10:15.121821 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b27f73ed-a290-5ab5-82ba-70ebe910dd97', 'data_vg': 'ceph-b27f73ed-a290-5ab5-82ba-70ebe910dd97'})
2025-05-28 17:10:15.121988 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fbdc558b-af0f-50ef-b610-4a3c4fb87cac', 'data_vg': 'ceph-fbdc558b-af0f-50ef-b610-4a3c4fb87cac'})
2025-05-28 17:10:15.122772 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:10:15.123535 | orchestrator |
2025-05-28 17:10:15.126524 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-05-28 17:10:15.126643 | orchestrator | Wednesday 28 May 2025 17:10:15 +0000 (0:00:00.162) 0:00:23.149 *********
2025-05-28 17:10:15.402545 | orchestrator | ok: [testbed-node-3] => {
2025-05-28 17:10:15.402737 | orchestrator |  "lvm_report": {
2025-05-28 17:10:15.403082 | orchestrator |  "lv": [
2025-05-28 17:10:15.403896 | orchestrator |  {
2025-05-28 17:10:15.405417 | orchestrator |  "lv_name": "osd-block-b27f73ed-a290-5ab5-82ba-70ebe910dd97",
2025-05-28 17:10:15.405628 | orchestrator |  "vg_name": "ceph-b27f73ed-a290-5ab5-82ba-70ebe910dd97"
2025-05-28 17:10:15.406323 | orchestrator |  },
2025-05-28 17:10:15.406535 | orchestrator |  {
2025-05-28 17:10:15.406995 | orchestrator |  "lv_name": "osd-block-fbdc558b-af0f-50ef-b610-4a3c4fb87cac",
2025-05-28 17:10:15.407670 | orchestrator |  "vg_name": "ceph-fbdc558b-af0f-50ef-b610-4a3c4fb87cac"
2025-05-28 17:10:15.409767 | orchestrator |  }
2025-05-28 17:10:15.410177 | orchestrator |  ],
2025-05-28 17:10:15.410436 | orchestrator |  "pv": [
2025-05-28 17:10:15.411612 | orchestrator |  {
2025-05-28 17:10:15.412737 | orchestrator |  "pv_name": "/dev/sdb",
2025-05-28 17:10:15.413197 | orchestrator |  "vg_name": "ceph-b27f73ed-a290-5ab5-82ba-70ebe910dd97"
2025-05-28 17:10:15.413580 | orchestrator |  },
2025-05-28 17:10:15.413992 | orchestrator |  {
2025-05-28 17:10:15.414488 | orchestrator |  "pv_name": "/dev/sdc",
2025-05-28 17:10:15.414957 | orchestrator |  "vg_name": "ceph-fbdc558b-af0f-50ef-b610-4a3c4fb87cac"
2025-05-28 17:10:15.415360 | orchestrator |  }
2025-05-28 17:10:15.416286 | orchestrator |  ]
2025-05-28 17:10:15.417109 | orchestrator |  }
2025-05-28 17:10:15.417909 | orchestrator | }
2025-05-28 17:10:15.418607 | orchestrator |
2025-05-28 17:10:15.418791 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-05-28 17:10:15.419395 | orchestrator |
2025-05-28 17:10:15.419911 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-28 17:10:15.420714 | orchestrator | Wednesday 28 May 2025 17:10:15 +0000 (0:00:00.278) 0:00:23.428 *********
2025-05-28 17:10:15.634454 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-05-28 17:10:15.634683 | orchestrator |
2025-05-28 17:10:15.635315 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-28 17:10:15.636446 | orchestrator | Wednesday 28 May 2025 17:10:15 +0000 (0:00:00.233) 0:00:23.661 *********
2025-05-28 17:10:15.851748 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:10:15.852375 | orchestrator |
2025-05-28 17:10:15.853430 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 17:10:15.854190 | orchestrator | Wednesday 28 May 2025 17:10:15 +0000 (0:00:00.216) 0:00:23.877 *********
2025-05-28 17:10:16.236087 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-05-28 17:10:16.236933 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-05-28 17:10:16.237789 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-05-28 17:10:16.239776 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-05-28 17:10:16.239856 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-05-28 17:10:16.241026 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-05-28 17:10:16.242341 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-05-28 17:10:16.243080 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-05-28 17:10:16.244142 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-05-28 17:10:16.245238 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-05-28 17:10:16.246200 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-05-28 17:10:16.247051 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-05-28 17:10:16.247784 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-28 17:10:16.248894 | orchestrator | 2025-05-28 17:10:16.250368 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:10:16.251495 | orchestrator | Wednesday 28 May 2025 17:10:16 +0000 (0:00:00.385) 0:00:24.263 ********* 2025-05-28 17:10:16.430707 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:16.431501 | orchestrator | 2025-05-28 17:10:16.432469 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:10:16.434985 | orchestrator | Wednesday 28 May 2025 17:10:16 +0000 (0:00:00.194) 0:00:24.458 ********* 2025-05-28 17:10:16.604754 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:16.605475 | orchestrator | 2025-05-28 17:10:16.606396 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:10:16.606934 | orchestrator | Wednesday 28 May 2025 17:10:16 +0000 (0:00:00.174) 0:00:24.632 ********* 2025-05-28 17:10:17.174654 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:17.175302 | orchestrator | 2025-05-28 17:10:17.175879 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:10:17.177309 | orchestrator | Wednesday 28 May 2025 17:10:17 +0000 (0:00:00.568) 0:00:25.200 ********* 2025-05-28 17:10:17.376279 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:17.376813 | orchestrator | 2025-05-28 17:10:17.377873 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:10:17.378941 | orchestrator | Wednesday 28 May 2025 17:10:17 +0000 (0:00:00.202) 0:00:25.403 ********* 2025-05-28 17:10:17.564896 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:17.565897 | orchestrator | 2025-05-28 17:10:17.567332 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:10:17.568395 | orchestrator | Wednesday 28 May 2025 17:10:17 +0000 (0:00:00.185) 0:00:25.589 ********* 2025-05-28 17:10:17.754415 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:17.755096 | orchestrator | 2025-05-28 17:10:17.756398 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:10:17.757329 | orchestrator | Wednesday 28 May 2025 17:10:17 +0000 (0:00:00.191) 0:00:25.780 ********* 2025-05-28 17:10:17.965990 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:17.969032 | orchestrator | 2025-05-28 17:10:17.969082 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:10:17.969136 | orchestrator | Wednesday 28 May 2025 17:10:17 +0000 (0:00:00.208) 0:00:25.989 ********* 2025-05-28 17:10:18.159556 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:18.160260 | orchestrator | 2025-05-28 17:10:18.160486 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:10:18.160945 | orchestrator | Wednesday 28 May 2025 17:10:18 +0000 (0:00:00.194) 0:00:26.184 ********* 2025-05-28 17:10:18.551519 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c) 2025-05-28 17:10:18.552226 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c) 2025-05-28 
17:10:18.553231 | orchestrator | 2025-05-28 17:10:18.553940 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:10:18.554604 | orchestrator | Wednesday 28 May 2025 17:10:18 +0000 (0:00:00.392) 0:00:26.576 ********* 2025-05-28 17:10:18.982183 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0444fcd6-ace4-41be-a60f-d61a86741ad0) 2025-05-28 17:10:18.982312 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0444fcd6-ace4-41be-a60f-d61a86741ad0) 2025-05-28 17:10:18.983662 | orchestrator | 2025-05-28 17:10:18.984718 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:10:18.985465 | orchestrator | Wednesday 28 May 2025 17:10:18 +0000 (0:00:00.427) 0:00:27.004 ********* 2025-05-28 17:10:19.397079 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d5a98c17-e489-4dc0-a000-f021a8d49d4d) 2025-05-28 17:10:19.397711 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d5a98c17-e489-4dc0-a000-f021a8d49d4d) 2025-05-28 17:10:19.401065 | orchestrator | 2025-05-28 17:10:19.401939 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:10:19.403036 | orchestrator | Wednesday 28 May 2025 17:10:19 +0000 (0:00:00.417) 0:00:27.422 ********* 2025-05-28 17:10:19.809583 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c3ba669b-02ce-4ac9-8d34-f5b1bbc1f6b4) 2025-05-28 17:10:19.810354 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c3ba669b-02ce-4ac9-8d34-f5b1bbc1f6b4) 2025-05-28 17:10:19.811479 | orchestrator | 2025-05-28 17:10:19.812496 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:10:19.813351 | orchestrator | Wednesday 28 May 2025 17:10:19 +0000 (0:00:00.413) 0:00:27.835 ********* 2025-05-28 17:10:20.129504 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-28 17:10:20.129734 | orchestrator | 2025-05-28 17:10:20.129763 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:10:20.130584 | orchestrator | Wednesday 28 May 2025 17:10:20 +0000 (0:00:00.320) 0:00:28.156 ********* 2025-05-28 17:10:20.699782 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-05-28 17:10:20.701391 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-05-28 17:10:20.703331 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-28 17:10:20.704071 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-28 17:10:20.705392 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-05-28 17:10:20.706449 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-28 17:10:20.706673 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-28 17:10:20.707238 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-28 17:10:20.707706 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-28 17:10:20.708148 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-28 17:10:20.708546 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-05-28 17:10:20.708970 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-28 17:10:20.709390 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-28 17:10:20.709821 | orchestrator | 2025-05-28 17:10:20.710272 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:10:20.710607 | orchestrator | Wednesday 28 May 2025 17:10:20 +0000 (0:00:00.570) 0:00:28.727 ********* 2025-05-28 17:10:20.904657 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:20.904993 | orchestrator | 2025-05-28 17:10:20.905394 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:10:20.906689 | orchestrator | Wednesday 28 May 2025 17:10:20 +0000 (0:00:00.203) 0:00:28.930 ********* 2025-05-28 17:10:21.095473 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:21.095651 | orchestrator | 2025-05-28 17:10:21.095937 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:10:21.096528 | orchestrator | Wednesday 28 May 2025 17:10:21 +0000 (0:00:00.192) 0:00:29.123 ********* 2025-05-28 17:10:21.296493 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:21.297008 | orchestrator | 2025-05-28 17:10:21.297971 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:10:21.298610 | orchestrator | Wednesday 28 May 2025 17:10:21 +0000 (0:00:00.200) 0:00:29.324 ********* 2025-05-28 17:10:21.485543 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:21.486085 | orchestrator | 2025-05-28 17:10:21.487405 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:10:21.489129 | orchestrator | Wednesday 28 May 2025 17:10:21 +0000 (0:00:00.189) 0:00:29.513 ********* 2025-05-28 17:10:21.702147 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:21.702780 | orchestrator | 2025-05-28 17:10:21.703227 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:10:21.704016 | orchestrator | Wednesday 28 May 2025 17:10:21 +0000 (0:00:00.217) 0:00:29.730 ********* 2025-05-28 17:10:21.880388 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:21.880915 | orchestrator | 2025-05-28 17:10:21.881873 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:10:21.884562 | orchestrator | Wednesday 28 May 2025 17:10:21 +0000 (0:00:00.176) 0:00:29.907 ********* 2025-05-28 17:10:22.060943 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:22.061735 | orchestrator | 2025-05-28 17:10:22.062897 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:10:22.063817 | orchestrator | Wednesday 28 May 2025 17:10:22 +0000 (0:00:00.181) 0:00:30.088 ********* 2025-05-28 17:10:22.253803 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:22.254824 | orchestrator | 2025-05-28 17:10:22.256341 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:10:22.257091 | orchestrator 
| Wednesday 28 May 2025 17:10:22 +0000 (0:00:00.193) 0:00:30.281 ********* 2025-05-28 17:10:23.051697 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-28 17:10:23.053394 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-28 17:10:23.053847 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-28 17:10:23.053871 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-28 17:10:23.054876 | orchestrator | 2025-05-28 17:10:23.055552 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:10:23.056284 | orchestrator | Wednesday 28 May 2025 17:10:23 +0000 (0:00:00.795) 0:00:31.077 ********* 2025-05-28 17:10:23.237684 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:23.239406 | orchestrator | 2025-05-28 17:10:23.240250 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:10:23.241825 | orchestrator | Wednesday 28 May 2025 17:10:23 +0000 (0:00:00.187) 0:00:31.265 ********* 2025-05-28 17:10:23.427897 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:23.428576 | orchestrator | 2025-05-28 17:10:23.429260 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:10:23.430381 | orchestrator | Wednesday 28 May 2025 17:10:23 +0000 (0:00:00.189) 0:00:31.454 ********* 2025-05-28 17:10:24.013849 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:24.014624 | orchestrator | 2025-05-28 17:10:24.015793 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:10:24.016772 | orchestrator | Wednesday 28 May 2025 17:10:24 +0000 (0:00:00.586) 0:00:32.041 ********* 2025-05-28 17:10:24.216618 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:24.217650 | orchestrator | 2025-05-28 17:10:24.218669 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-28 17:10:24.219884 | orchestrator | Wednesday 28 May 2025 17:10:24 +0000 (0:00:00.201) 0:00:32.242 ********* 2025-05-28 17:10:24.353617 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:24.353789 | orchestrator | 2025-05-28 17:10:24.354644 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-28 17:10:24.356522 | orchestrator | Wednesday 28 May 2025 17:10:24 +0000 (0:00:00.137) 0:00:32.380 ********* 2025-05-28 17:10:24.540281 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25'}}) 2025-05-28 17:10:24.540449 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7e811d1b-ccc9-571e-beba-983efbae239d'}}) 2025-05-28 17:10:24.541441 | orchestrator | 2025-05-28 17:10:24.542801 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-28 17:10:24.543539 | orchestrator | Wednesday 28 May 2025 17:10:24 +0000 (0:00:00.186) 0:00:32.567 ********* 2025-05-28 17:10:26.355076 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25', 'data_vg': 'ceph-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25'}) 2025-05-28 17:10:26.355478 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7e811d1b-ccc9-571e-beba-983efbae239d', 'data_vg': 'ceph-7e811d1b-ccc9-571e-beba-983efbae239d'}) 2025-05-28 17:10:26.357138 | orchestrator | 2025-05-28 17:10:26.357989 | orchestrator | TASK 
[Print 'Create block VGs'] ************************************************ 2025-05-28 17:10:26.358856 | orchestrator | Wednesday 28 May 2025 17:10:26 +0000 (0:00:01.813) 0:00:34.380 ********* 2025-05-28 17:10:26.505559 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25', 'data_vg': 'ceph-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25'})  2025-05-28 17:10:26.507147 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e811d1b-ccc9-571e-beba-983efbae239d', 'data_vg': 'ceph-7e811d1b-ccc9-571e-beba-983efbae239d'})  2025-05-28 17:10:26.508093 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:26.510136 | orchestrator | 2025-05-28 17:10:26.510623 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-28 17:10:26.511919 | orchestrator | Wednesday 28 May 2025 17:10:26 +0000 (0:00:00.152) 0:00:34.533 ********* 2025-05-28 17:10:27.796498 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25', 'data_vg': 'ceph-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25'}) 2025-05-28 17:10:27.797157 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7e811d1b-ccc9-571e-beba-983efbae239d', 'data_vg': 'ceph-7e811d1b-ccc9-571e-beba-983efbae239d'}) 2025-05-28 17:10:27.798199 | orchestrator | 2025-05-28 17:10:27.799302 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-28 17:10:27.800360 | orchestrator | Wednesday 28 May 2025 17:10:27 +0000 (0:00:01.286) 0:00:35.820 ********* 2025-05-28 17:10:27.945339 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25', 'data_vg': 'ceph-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25'})  2025-05-28 17:10:27.945761 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e811d1b-ccc9-571e-beba-983efbae239d', 'data_vg': 'ceph-7e811d1b-ccc9-571e-beba-983efbae239d'})  2025-05-28 17:10:27.946713 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:27.947471 | orchestrator | 2025-05-28 17:10:27.948292 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-28 17:10:27.949384 | orchestrator | Wednesday 28 May 2025 17:10:27 +0000 (0:00:00.153) 0:00:35.973 ********* 2025-05-28 17:10:28.077047 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:28.078412 | orchestrator | 2025-05-28 17:10:28.080309 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-28 17:10:28.080335 | orchestrator | Wednesday 28 May 2025 17:10:28 +0000 (0:00:00.130) 0:00:36.104 ********* 2025-05-28 17:10:28.224065 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25', 'data_vg': 'ceph-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25'})  2025-05-28 17:10:28.225585 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e811d1b-ccc9-571e-beba-983efbae239d', 'data_vg': 'ceph-7e811d1b-ccc9-571e-beba-983efbae239d'})  2025-05-28 17:10:28.227744 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:28.228178 | orchestrator | 2025-05-28 17:10:28.229416 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-28 17:10:28.230264 | orchestrator | Wednesday 28 May 2025 17:10:28 +0000 (0:00:00.146) 0:00:36.250 ********* 2025-05-28 17:10:28.359254 | orchestrator | skipping: [testbed-node-4] 
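The two 'changed' tasks above are where the devices are actually prepared on testbed-node-4: one VG per disk from ceph_osd_devices, named ceph-<osd_lvm_uuid> and backed by /dev/sdb and /dev/sdc per the dict built earlier, and one osd-block-<osd_lvm_uuid> LV per VG, matching the lvm_volumes items in the loops. A hedged sketch of equivalent tasks (an assumption, not necessarily the literal OSISM implementation) using the community.general LVM modules:

    # Hedged sketch; module choice and parameters are assumptions, only the
    # names and loop shapes are taken from the log output above.
    - name: Create block VGs
      community.general.lvg:
        vg: "ceph-{{ item.value.osd_lvm_uuid }}"
        pvs:
          - "/dev/{{ item.key }}"
      loop: "{{ ceph_osd_devices | dict2items }}"

    - name: Create block LVs
      community.general.lvol:
        vg: "{{ item.data_vg }}"
        lv: "{{ item.data }}"
        size: 100%VG
        shrink: false
      loop: "{{ lvm_volumes }}"

Both modules are idempotent, which is why the same pair reported 'changed' for testbed-node-3 earlier in the log but would report 'ok' on a re-run against existing VGs and LVs.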
2025-05-28 17:10:28.359863 | orchestrator | 2025-05-28 17:10:28.360469 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-28 17:10:28.362211 | orchestrator | Wednesday 28 May 2025 17:10:28 +0000 (0:00:00.135) 0:00:36.386 ********* 2025-05-28 17:10:28.509176 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25', 'data_vg': 'ceph-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25'})  2025-05-28 17:10:28.509775 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e811d1b-ccc9-571e-beba-983efbae239d', 'data_vg': 'ceph-7e811d1b-ccc9-571e-beba-983efbae239d'})  2025-05-28 17:10:28.510801 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:28.511391 | orchestrator | 2025-05-28 17:10:28.511874 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-28 17:10:28.512770 | orchestrator | Wednesday 28 May 2025 17:10:28 +0000 (0:00:00.148) 0:00:36.535 ********* 2025-05-28 17:10:28.830404 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:28.830996 | orchestrator | 2025-05-28 17:10:28.836732 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-28 17:10:28.836771 | orchestrator | Wednesday 28 May 2025 17:10:28 +0000 (0:00:00.323) 0:00:36.858 ********* 2025-05-28 17:10:28.976784 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25', 'data_vg': 'ceph-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25'})  2025-05-28 17:10:28.977474 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e811d1b-ccc9-571e-beba-983efbae239d', 'data_vg': 'ceph-7e811d1b-ccc9-571e-beba-983efbae239d'})  2025-05-28 17:10:28.978753 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:28.979640 | orchestrator | 2025-05-28 17:10:28.980858 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-28 17:10:28.981535 | orchestrator | Wednesday 28 May 2025 17:10:28 +0000 (0:00:00.146) 0:00:37.004 ********* 2025-05-28 17:10:29.101957 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:10:29.102319 | orchestrator | 2025-05-28 17:10:29.103291 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-28 17:10:29.104709 | orchestrator | Wednesday 28 May 2025 17:10:29 +0000 (0:00:00.125) 0:00:37.129 ********* 2025-05-28 17:10:29.251539 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25', 'data_vg': 'ceph-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25'})  2025-05-28 17:10:29.251915 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e811d1b-ccc9-571e-beba-983efbae239d', 'data_vg': 'ceph-7e811d1b-ccc9-571e-beba-983efbae239d'})  2025-05-28 17:10:29.253384 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:29.255278 | orchestrator | 2025-05-28 17:10:29.255305 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-28 17:10:29.256133 | orchestrator | Wednesday 28 May 2025 17:10:29 +0000 (0:00:00.149) 0:00:37.279 ********* 2025-05-28 17:10:29.397295 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25', 'data_vg': 'ceph-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25'})  2025-05-28 17:10:29.397701 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-7e811d1b-ccc9-571e-beba-983efbae239d', 'data_vg': 'ceph-7e811d1b-ccc9-571e-beba-983efbae239d'})  2025-05-28 17:10:29.398692 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:29.399255 | orchestrator | 2025-05-28 17:10:29.401577 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-28 17:10:29.401668 | orchestrator | Wednesday 28 May 2025 17:10:29 +0000 (0:00:00.145) 0:00:37.424 ********* 2025-05-28 17:10:29.552924 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25', 'data_vg': 'ceph-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25'})  2025-05-28 17:10:29.553251 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e811d1b-ccc9-571e-beba-983efbae239d', 'data_vg': 'ceph-7e811d1b-ccc9-571e-beba-983efbae239d'})  2025-05-28 17:10:29.554897 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:29.556572 | orchestrator | 2025-05-28 17:10:29.556805 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-28 17:10:29.557767 | orchestrator | Wednesday 28 May 2025 17:10:29 +0000 (0:00:00.154) 0:00:37.579 ********* 2025-05-28 17:10:29.699340 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:29.700362 | orchestrator | 2025-05-28 17:10:29.701731 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-28 17:10:29.701862 | orchestrator | Wednesday 28 May 2025 17:10:29 +0000 (0:00:00.147) 0:00:37.726 ********* 2025-05-28 17:10:29.849775 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:29.851477 | orchestrator | 2025-05-28 17:10:29.853339 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-28 17:10:29.853608 | orchestrator | Wednesday 28 May 2025 17:10:29 +0000 (0:00:00.150) 0:00:37.877 ********* 2025-05-28 17:10:29.994076 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:29.995012 | orchestrator | 2025-05-28 17:10:29.996707 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-28 17:10:29.997901 | orchestrator | Wednesday 28 May 2025 17:10:29 +0000 (0:00:00.143) 0:00:38.021 ********* 2025-05-28 17:10:30.130885 | orchestrator | ok: [testbed-node-4] => { 2025-05-28 17:10:30.131887 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-28 17:10:30.133734 | orchestrator | } 2025-05-28 17:10:30.134550 | orchestrator | 2025-05-28 17:10:30.135403 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-28 17:10:30.136035 | orchestrator | Wednesday 28 May 2025 17:10:30 +0000 (0:00:00.136) 0:00:38.157 ********* 2025-05-28 17:10:30.287339 | orchestrator | ok: [testbed-node-4] => { 2025-05-28 17:10:30.287985 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-28 17:10:30.289066 | orchestrator | } 2025-05-28 17:10:30.290616 | orchestrator | 2025-05-28 17:10:30.290985 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-28 17:10:30.291652 | orchestrator | Wednesday 28 May 2025 17:10:30 +0000 (0:00:00.155) 0:00:38.313 ********* 2025-05-28 17:10:30.422971 | orchestrator | ok: [testbed-node-4] => { 2025-05-28 17:10:30.424230 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-28 17:10:30.426925 | orchestrator | } 2025-05-28 17:10:30.427700 | orchestrator | 2025-05-28 17:10:30.428586 | orchestrator | TASK [Gather DB VGs 
with total and available size in bytes] ******************** 2025-05-28 17:10:30.429510 | orchestrator | Wednesday 28 May 2025 17:10:30 +0000 (0:00:00.136) 0:00:38.450 ********* 2025-05-28 17:10:31.121223 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:10:31.123169 | orchestrator | 2025-05-28 17:10:31.125751 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-28 17:10:31.126607 | orchestrator | Wednesday 28 May 2025 17:10:31 +0000 (0:00:00.698) 0:00:39.148 ********* 2025-05-28 17:10:31.652973 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:10:31.654357 | orchestrator | 2025-05-28 17:10:31.654563 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-28 17:10:31.655501 | orchestrator | Wednesday 28 May 2025 17:10:31 +0000 (0:00:00.531) 0:00:39.679 ********* 2025-05-28 17:10:32.218171 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:10:32.218289 | orchestrator | 2025-05-28 17:10:32.218570 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-28 17:10:32.218960 | orchestrator | Wednesday 28 May 2025 17:10:32 +0000 (0:00:00.566) 0:00:40.245 ********* 2025-05-28 17:10:32.354261 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:10:32.354745 | orchestrator | 2025-05-28 17:10:32.355939 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-28 17:10:32.357195 | orchestrator | Wednesday 28 May 2025 17:10:32 +0000 (0:00:00.134) 0:00:40.380 ********* 2025-05-28 17:10:32.470114 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:32.470594 | orchestrator | 2025-05-28 17:10:32.471567 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-28 17:10:32.474242 | orchestrator | Wednesday 28 May 2025 17:10:32 +0000 (0:00:00.117) 0:00:40.497 ********* 2025-05-28 17:10:32.581303 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:32.581756 | orchestrator | 2025-05-28 17:10:32.582906 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-28 17:10:32.586529 | orchestrator | Wednesday 28 May 2025 17:10:32 +0000 (0:00:00.109) 0:00:40.607 ********* 2025-05-28 17:10:32.719830 | orchestrator | ok: [testbed-node-4] => { 2025-05-28 17:10:32.720638 | orchestrator |  "vgs_report": { 2025-05-28 17:10:32.721631 | orchestrator |  "vg": [] 2025-05-28 17:10:32.722562 | orchestrator |  } 2025-05-28 17:10:32.723863 | orchestrator | } 2025-05-28 17:10:32.724406 | orchestrator | 2025-05-28 17:10:32.725302 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-28 17:10:32.725676 | orchestrator | Wednesday 28 May 2025 17:10:32 +0000 (0:00:00.139) 0:00:40.746 ********* 2025-05-28 17:10:32.845552 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:32.845711 | orchestrator | 2025-05-28 17:10:32.845725 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-28 17:10:32.845738 | orchestrator | Wednesday 28 May 2025 17:10:32 +0000 (0:00:00.126) 0:00:40.872 ********* 2025-05-28 17:10:32.962355 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:32.962929 | orchestrator | 2025-05-28 17:10:32.962963 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-28 17:10:32.963849 | orchestrator | Wednesday 28 May 2025 17:10:32 +0000 (0:00:00.115) 
0:00:40.988 ********* 2025-05-28 17:10:33.087907 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:33.088038 | orchestrator | 2025-05-28 17:10:33.088186 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-28 17:10:33.088594 | orchestrator | Wednesday 28 May 2025 17:10:33 +0000 (0:00:00.126) 0:00:41.115 ********* 2025-05-28 17:10:33.215181 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:33.216351 | orchestrator | 2025-05-28 17:10:33.217951 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-28 17:10:33.217977 | orchestrator | Wednesday 28 May 2025 17:10:33 +0000 (0:00:00.127) 0:00:41.242 ********* 2025-05-28 17:10:33.350730 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:33.351402 | orchestrator | 2025-05-28 17:10:33.353008 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-28 17:10:33.354301 | orchestrator | Wednesday 28 May 2025 17:10:33 +0000 (0:00:00.135) 0:00:41.378 ********* 2025-05-28 17:10:33.694517 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:33.694635 | orchestrator | 2025-05-28 17:10:33.694650 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-28 17:10:33.694998 | orchestrator | Wednesday 28 May 2025 17:10:33 +0000 (0:00:00.339) 0:00:41.717 ********* 2025-05-28 17:10:33.823697 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:33.824286 | orchestrator | 2025-05-28 17:10:33.825524 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-28 17:10:33.826367 | orchestrator | Wednesday 28 May 2025 17:10:33 +0000 (0:00:00.131) 0:00:41.849 ********* 2025-05-28 17:10:33.957828 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:33.957949 | orchestrator | 2025-05-28 17:10:33.957966 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-28 17:10:33.958797 | orchestrator | Wednesday 28 May 2025 17:10:33 +0000 (0:00:00.132) 0:00:41.981 ********* 2025-05-28 17:10:34.088658 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:34.088786 | orchestrator | 2025-05-28 17:10:34.090358 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-28 17:10:34.091484 | orchestrator | Wednesday 28 May 2025 17:10:34 +0000 (0:00:00.131) 0:00:42.113 ********* 2025-05-28 17:10:34.218435 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:34.219305 | orchestrator | 2025-05-28 17:10:34.220442 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-28 17:10:34.221299 | orchestrator | Wednesday 28 May 2025 17:10:34 +0000 (0:00:00.129) 0:00:42.243 ********* 2025-05-28 17:10:34.345838 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:34.346552 | orchestrator | 2025-05-28 17:10:34.347417 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-28 17:10:34.348211 | orchestrator | Wednesday 28 May 2025 17:10:34 +0000 (0:00:00.129) 0:00:42.372 ********* 2025-05-28 17:10:34.486635 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:34.486753 | orchestrator | 2025-05-28 17:10:34.487747 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-28 17:10:34.488992 | orchestrator | Wednesday 28 May 2025 17:10:34 
+0000 (0:00:00.139) 0:00:42.511 ********* 2025-05-28 17:10:34.621145 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:34.622410 | orchestrator | 2025-05-28 17:10:34.625438 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-28 17:10:34.625955 | orchestrator | Wednesday 28 May 2025 17:10:34 +0000 (0:00:00.133) 0:00:42.645 ********* 2025-05-28 17:10:34.750986 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:34.753624 | orchestrator | 2025-05-28 17:10:34.754895 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-28 17:10:34.755895 | orchestrator | Wednesday 28 May 2025 17:10:34 +0000 (0:00:00.131) 0:00:42.776 ********* 2025-05-28 17:10:34.900978 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25', 'data_vg': 'ceph-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25'})  2025-05-28 17:10:34.902656 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e811d1b-ccc9-571e-beba-983efbae239d', 'data_vg': 'ceph-7e811d1b-ccc9-571e-beba-983efbae239d'})  2025-05-28 17:10:34.903452 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:34.904848 | orchestrator | 2025-05-28 17:10:34.906170 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-28 17:10:34.907627 | orchestrator | Wednesday 28 May 2025 17:10:34 +0000 (0:00:00.151) 0:00:42.928 ********* 2025-05-28 17:10:35.048598 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25', 'data_vg': 'ceph-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25'})  2025-05-28 17:10:35.049319 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e811d1b-ccc9-571e-beba-983efbae239d', 'data_vg': 'ceph-7e811d1b-ccc9-571e-beba-983efbae239d'})  2025-05-28 17:10:35.050385 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:35.051962 | orchestrator | 2025-05-28 17:10:35.052467 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-28 17:10:35.053299 | orchestrator | Wednesday 28 May 2025 17:10:35 +0000 (0:00:00.147) 0:00:43.076 ********* 2025-05-28 17:10:35.192786 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25', 'data_vg': 'ceph-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25'})  2025-05-28 17:10:35.193035 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e811d1b-ccc9-571e-beba-983efbae239d', 'data_vg': 'ceph-7e811d1b-ccc9-571e-beba-983efbae239d'})  2025-05-28 17:10:35.194463 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:35.195311 | orchestrator | 2025-05-28 17:10:35.196096 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-28 17:10:35.197357 | orchestrator | Wednesday 28 May 2025 17:10:35 +0000 (0:00:00.143) 0:00:43.219 ********* 2025-05-28 17:10:35.541360 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25', 'data_vg': 'ceph-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25'})  2025-05-28 17:10:35.541581 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e811d1b-ccc9-571e-beba-983efbae239d', 'data_vg': 'ceph-7e811d1b-ccc9-571e-beba-983efbae239d'})  2025-05-28 17:10:35.541851 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:35.542656 | orchestrator | 2025-05-28 17:10:35.544227 | 
orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-28 17:10:35.544521 | orchestrator | Wednesday 28 May 2025 17:10:35 +0000 (0:00:00.347) 0:00:43.567 ********* 2025-05-28 17:10:35.694129 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25', 'data_vg': 'ceph-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25'})  2025-05-28 17:10:35.694821 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e811d1b-ccc9-571e-beba-983efbae239d', 'data_vg': 'ceph-7e811d1b-ccc9-571e-beba-983efbae239d'})  2025-05-28 17:10:35.696493 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:35.697386 | orchestrator | 2025-05-28 17:10:35.698526 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-28 17:10:35.699208 | orchestrator | Wednesday 28 May 2025 17:10:35 +0000 (0:00:00.153) 0:00:43.720 ********* 2025-05-28 17:10:35.841880 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25', 'data_vg': 'ceph-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25'})  2025-05-28 17:10:35.842820 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e811d1b-ccc9-571e-beba-983efbae239d', 'data_vg': 'ceph-7e811d1b-ccc9-571e-beba-983efbae239d'})  2025-05-28 17:10:35.844336 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:35.844798 | orchestrator | 2025-05-28 17:10:35.845564 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-28 17:10:35.846151 | orchestrator | Wednesday 28 May 2025 17:10:35 +0000 (0:00:00.149) 0:00:43.869 ********* 2025-05-28 17:10:35.992880 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25', 'data_vg': 'ceph-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25'})  2025-05-28 17:10:35.993505 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e811d1b-ccc9-571e-beba-983efbae239d', 'data_vg': 'ceph-7e811d1b-ccc9-571e-beba-983efbae239d'})  2025-05-28 17:10:35.994241 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:35.995148 | orchestrator | 2025-05-28 17:10:35.995812 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-28 17:10:35.998129 | orchestrator | Wednesday 28 May 2025 17:10:35 +0000 (0:00:00.151) 0:00:44.021 ********* 2025-05-28 17:10:36.154679 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25', 'data_vg': 'ceph-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25'})  2025-05-28 17:10:36.155326 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e811d1b-ccc9-571e-beba-983efbae239d', 'data_vg': 'ceph-7e811d1b-ccc9-571e-beba-983efbae239d'})  2025-05-28 17:10:36.155512 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:36.156492 | orchestrator | 2025-05-28 17:10:36.157428 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-28 17:10:36.157595 | orchestrator | Wednesday 28 May 2025 17:10:36 +0000 (0:00:00.161) 0:00:44.182 ********* 2025-05-28 17:10:36.661526 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:10:36.662816 | orchestrator | 2025-05-28 17:10:36.663327 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-28 17:10:36.664324 | orchestrator | Wednesday 28 May 2025 17:10:36 +0000 (0:00:00.504) 
0:00:44.686 ********* 2025-05-28 17:10:37.167323 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:10:37.169672 | orchestrator | 2025-05-28 17:10:37.170953 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-28 17:10:37.171414 | orchestrator | Wednesday 28 May 2025 17:10:37 +0000 (0:00:00.508) 0:00:45.194 ********* 2025-05-28 17:10:37.309663 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:10:37.309857 | orchestrator | 2025-05-28 17:10:37.310581 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-28 17:10:37.310948 | orchestrator | Wednesday 28 May 2025 17:10:37 +0000 (0:00:00.142) 0:00:45.337 ********* 2025-05-28 17:10:37.472113 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-7e811d1b-ccc9-571e-beba-983efbae239d', 'vg_name': 'ceph-7e811d1b-ccc9-571e-beba-983efbae239d'}) 2025-05-28 17:10:37.472351 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25', 'vg_name': 'ceph-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25'}) 2025-05-28 17:10:37.472414 | orchestrator | 2025-05-28 17:10:37.472704 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-28 17:10:37.474360 | orchestrator | Wednesday 28 May 2025 17:10:37 +0000 (0:00:00.161) 0:00:45.499 ********* 2025-05-28 17:10:37.626912 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25', 'data_vg': 'ceph-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25'})  2025-05-28 17:10:37.627589 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e811d1b-ccc9-571e-beba-983efbae239d', 'data_vg': 'ceph-7e811d1b-ccc9-571e-beba-983efbae239d'})  2025-05-28 17:10:37.628333 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:37.630393 | orchestrator | 2025-05-28 17:10:37.630412 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-28 17:10:37.630974 | orchestrator | Wednesday 28 May 2025 17:10:37 +0000 (0:00:00.153) 0:00:45.653 ********* 2025-05-28 17:10:37.772904 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25', 'data_vg': 'ceph-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25'})  2025-05-28 17:10:37.773348 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e811d1b-ccc9-571e-beba-983efbae239d', 'data_vg': 'ceph-7e811d1b-ccc9-571e-beba-983efbae239d'})  2025-05-28 17:10:37.775401 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:37.775436 | orchestrator | 2025-05-28 17:10:37.775809 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-28 17:10:37.776570 | orchestrator | Wednesday 28 May 2025 17:10:37 +0000 (0:00:00.145) 0:00:45.799 ********* 2025-05-28 17:10:37.921842 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25', 'data_vg': 'ceph-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25'})  2025-05-28 17:10:37.922531 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e811d1b-ccc9-571e-beba-983efbae239d', 'data_vg': 'ceph-7e811d1b-ccc9-571e-beba-983efbae239d'})  2025-05-28 17:10:37.923229 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:10:37.924537 | orchestrator | 2025-05-28 17:10:37.925188 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-28 
17:10:37.925944 | orchestrator | Wednesday 28 May 2025 17:10:37 +0000 (0:00:00.150) 0:00:45.949 ********* 2025-05-28 17:10:38.386286 | orchestrator | ok: [testbed-node-4] => { 2025-05-28 17:10:38.388182 | orchestrator |  "lvm_report": { 2025-05-28 17:10:38.389810 | orchestrator |  "lv": [ 2025-05-28 17:10:38.392372 | orchestrator |  { 2025-05-28 17:10:38.392779 | orchestrator |  "lv_name": "osd-block-7e811d1b-ccc9-571e-beba-983efbae239d", 2025-05-28 17:10:38.393411 | orchestrator |  "vg_name": "ceph-7e811d1b-ccc9-571e-beba-983efbae239d" 2025-05-28 17:10:38.393879 | orchestrator |  }, 2025-05-28 17:10:38.394385 | orchestrator |  { 2025-05-28 17:10:38.394885 | orchestrator |  "lv_name": "osd-block-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25", 2025-05-28 17:10:38.395257 | orchestrator |  "vg_name": "ceph-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25" 2025-05-28 17:10:38.395617 | orchestrator |  } 2025-05-28 17:10:38.396190 | orchestrator |  ], 2025-05-28 17:10:38.396407 | orchestrator |  "pv": [ 2025-05-28 17:10:38.396771 | orchestrator |  { 2025-05-28 17:10:38.397208 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-28 17:10:38.397703 | orchestrator |  "vg_name": "ceph-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25" 2025-05-28 17:10:38.398135 | orchestrator |  }, 2025-05-28 17:10:38.398507 | orchestrator |  { 2025-05-28 17:10:38.398893 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-28 17:10:38.399327 | orchestrator |  "vg_name": "ceph-7e811d1b-ccc9-571e-beba-983efbae239d" 2025-05-28 17:10:38.399621 | orchestrator |  } 2025-05-28 17:10:38.400104 | orchestrator |  ] 2025-05-28 17:10:38.400397 | orchestrator |  } 2025-05-28 17:10:38.400750 | orchestrator | } 2025-05-28 17:10:38.401208 | orchestrator | 2025-05-28 17:10:38.401921 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-28 17:10:38.402251 | orchestrator | 2025-05-28 17:10:38.402419 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-28 17:10:38.402705 | orchestrator | Wednesday 28 May 2025 17:10:38 +0000 (0:00:00.464) 0:00:46.413 ********* 2025-05-28 17:10:38.624774 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-28 17:10:38.625534 | orchestrator | 2025-05-28 17:10:38.625571 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-28 17:10:38.626182 | orchestrator | Wednesday 28 May 2025 17:10:38 +0000 (0:00:00.236) 0:00:46.650 ********* 2025-05-28 17:10:38.847205 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:10:38.847920 | orchestrator | 2025-05-28 17:10:38.852660 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:10:38.852690 | orchestrator | Wednesday 28 May 2025 17:10:38 +0000 (0:00:00.223) 0:00:46.874 ********* 2025-05-28 17:10:39.225972 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-05-28 17:10:39.226159 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-05-28 17:10:39.226173 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-05-28 17:10:39.226637 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-05-28 17:10:39.228139 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-05-28 17:10:39.229277 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-05-28 17:10:39.230644 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-05-28 17:10:39.231717 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-28 17:10:39.233201 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-05-28 17:10:39.233294 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-05-28 17:10:39.234132 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-05-28 17:10:39.234789 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-05-28 17:10:39.235578 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-05-28 17:10:39.236427 | orchestrator | 2025-05-28 17:10:39.237045 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:10:39.237814 | orchestrator | Wednesday 28 May 2025 17:10:39 +0000 (0:00:00.377) 0:00:47.251 ********* 2025-05-28 17:10:39.401296 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:39.403333 | orchestrator | 2025-05-28 17:10:39.403516 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:10:39.404264 | orchestrator | Wednesday 28 May 2025 17:10:39 +0000 (0:00:00.176) 0:00:47.428 ********* 2025-05-28 17:10:39.602215 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:39.602621 | orchestrator | 2025-05-28 17:10:39.603732 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:10:39.605343 | orchestrator | Wednesday 28 May 2025 17:10:39 +0000 (0:00:00.200) 0:00:47.629 ********* 2025-05-28 17:10:39.796055 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:39.796699 | orchestrator | 2025-05-28 17:10:39.797011 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:10:39.799791 | orchestrator | Wednesday 28 May 2025 17:10:39 +0000 (0:00:00.194) 0:00:47.823 ********* 2025-05-28 17:10:39.977132 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:39.977237 | orchestrator | 2025-05-28 17:10:39.977537 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:10:39.978202 | orchestrator | Wednesday 28 May 2025 17:10:39 +0000 (0:00:00.180) 0:00:48.004 ********* 2025-05-28 17:10:40.177789 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:40.179459 | orchestrator | 2025-05-28 17:10:40.180020 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:10:40.180553 | orchestrator | Wednesday 28 May 2025 17:10:40 +0000 (0:00:00.198) 0:00:48.202 ********* 2025-05-28 17:10:40.734440 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:40.735329 | orchestrator | 2025-05-28 17:10:40.735711 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:10:40.736658 | orchestrator | Wednesday 28 May 2025 17:10:40 +0000 (0:00:00.555) 0:00:48.758 ********* 2025-05-28 17:10:40.925249 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:40.925782 | orchestrator | 2025-05-28 17:10:40.926963 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2025-05-28 17:10:40.928376 | orchestrator | Wednesday 28 May 2025 17:10:40 +0000 (0:00:00.193) 0:00:48.952 ********* 2025-05-28 17:10:41.103650 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:41.104694 | orchestrator | 2025-05-28 17:10:41.106226 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:10:41.106937 | orchestrator | Wednesday 28 May 2025 17:10:41 +0000 (0:00:00.178) 0:00:49.130 ********* 2025-05-28 17:10:41.501917 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f) 2025-05-28 17:10:41.502113 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f) 2025-05-28 17:10:41.502200 | orchestrator | 2025-05-28 17:10:41.502550 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:10:41.502761 | orchestrator | Wednesday 28 May 2025 17:10:41 +0000 (0:00:00.398) 0:00:49.529 ********* 2025-05-28 17:10:41.907630 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1369a208-db5b-4ff3-8df7-c2f8ed8178e8) 2025-05-28 17:10:41.908519 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1369a208-db5b-4ff3-8df7-c2f8ed8178e8) 2025-05-28 17:10:41.909570 | orchestrator | 2025-05-28 17:10:41.910347 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:10:41.910731 | orchestrator | Wednesday 28 May 2025 17:10:41 +0000 (0:00:00.404) 0:00:49.934 ********* 2025-05-28 17:10:42.298656 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3045bd6c-b8ff-4958-af32-f9dea72800f3) 2025-05-28 17:10:42.299287 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3045bd6c-b8ff-4958-af32-f9dea72800f3) 2025-05-28 17:10:42.300099 | orchestrator | 2025-05-28 17:10:42.300837 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:10:42.301337 | orchestrator | Wednesday 28 May 2025 17:10:42 +0000 (0:00:00.392) 0:00:50.326 ********* 2025-05-28 17:10:42.688692 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_80beb2a7-6ee1-4917-8c3d-de783739f119) 2025-05-28 17:10:42.689698 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_80beb2a7-6ee1-4917-8c3d-de783739f119) 2025-05-28 17:10:42.690147 | orchestrator | 2025-05-28 17:10:42.692205 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 17:10:42.692227 | orchestrator | Wednesday 28 May 2025 17:10:42 +0000 (0:00:00.388) 0:00:50.715 ********* 2025-05-28 17:10:43.013345 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-28 17:10:43.014381 | orchestrator | 2025-05-28 17:10:43.015200 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:10:43.016284 | orchestrator | Wednesday 28 May 2025 17:10:43 +0000 (0:00:00.325) 0:00:51.040 ********* 2025-05-28 17:10:43.419112 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-28 17:10:43.419744 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-05-28 17:10:43.421289 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 
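The include loops here enumerate every block device (loop0-loop7, sda-sdd, sr0) and record each device's stable /dev/disk/by-id aliases and partitions, so later Ceph OSD device references survive kernel-name reordering. A minimal sketch of the same discovery with standard tools, illustrative only; the playbook's actual logic lives in /ansible/tasks/_add-device-links.yml and /ansible/tasks/_add-device-partitions.yml:

    # Map each by-id alias (as seen in the task output above) to its kernel device
    for link in /dev/disk/by-id/scsi-*QEMU_QEMU_HARDDISK*; do
        printf '%s -> %s\n' "$link" "$(readlink -f "$link")"
    done

    # lsblk emits the same device/partition inventory as JSON
    lsblk --json -o NAME,TYPE,SIZE /dev/sda /dev/sdb /dev/sdc /dev/sdd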
2025-05-28 17:10:43.421778 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-28 17:10:43.422321 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-28 17:10:43.423019 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-28 17:10:43.423814 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-28 17:10:43.424328 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-28 17:10:43.425005 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-28 17:10:43.425381 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-28 17:10:43.425750 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-05-28 17:10:43.426263 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-28 17:10:43.426561 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-28 17:10:43.426962 | orchestrator | 2025-05-28 17:10:43.427569 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:10:43.427876 | orchestrator | Wednesday 28 May 2025 17:10:43 +0000 (0:00:00.405) 0:00:51.446 ********* 2025-05-28 17:10:43.604384 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:43.604583 | orchestrator | 2025-05-28 17:10:43.604682 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:10:43.605444 | orchestrator | Wednesday 28 May 2025 17:10:43 +0000 (0:00:00.185) 0:00:51.631 ********* 2025-05-28 17:10:43.803224 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:43.804577 | orchestrator | 2025-05-28 17:10:43.806386 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:10:43.806527 | orchestrator | Wednesday 28 May 2025 17:10:43 +0000 (0:00:00.197) 0:00:51.829 ********* 2025-05-28 17:10:44.372814 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:44.373377 | orchestrator | 2025-05-28 17:10:44.373410 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:10:44.373424 | orchestrator | Wednesday 28 May 2025 17:10:44 +0000 (0:00:00.571) 0:00:52.400 ********* 2025-05-28 17:10:44.567828 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:44.569602 | orchestrator | 2025-05-28 17:10:44.570384 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:10:44.571471 | orchestrator | Wednesday 28 May 2025 17:10:44 +0000 (0:00:00.194) 0:00:52.595 ********* 2025-05-28 17:10:44.770293 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:44.770405 | orchestrator | 2025-05-28 17:10:44.771359 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:10:44.771766 | orchestrator | Wednesday 28 May 2025 17:10:44 +0000 (0:00:00.200) 0:00:52.796 ********* 2025-05-28 17:10:44.951369 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:44.953094 | orchestrator | 2025-05-28 17:10:44.953531 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-05-28 17:10:44.954318 | orchestrator | Wednesday 28 May 2025 17:10:44 +0000 (0:00:00.183) 0:00:52.979 ********* 2025-05-28 17:10:45.137109 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:45.137626 | orchestrator | 2025-05-28 17:10:45.138600 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:10:45.139028 | orchestrator | Wednesday 28 May 2025 17:10:45 +0000 (0:00:00.186) 0:00:53.165 ********* 2025-05-28 17:10:45.326201 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:45.327320 | orchestrator | 2025-05-28 17:10:45.328312 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:10:45.329638 | orchestrator | Wednesday 28 May 2025 17:10:45 +0000 (0:00:00.188) 0:00:53.354 ********* 2025-05-28 17:10:45.939315 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-28 17:10:45.939670 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-05-28 17:10:45.941348 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-28 17:10:45.942272 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-28 17:10:45.942942 | orchestrator | 2025-05-28 17:10:45.943661 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:10:45.944273 | orchestrator | Wednesday 28 May 2025 17:10:45 +0000 (0:00:00.610) 0:00:53.964 ********* 2025-05-28 17:10:46.138379 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:46.139901 | orchestrator | 2025-05-28 17:10:46.139921 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:10:46.139954 | orchestrator | Wednesday 28 May 2025 17:10:46 +0000 (0:00:00.195) 0:00:54.160 ********* 2025-05-28 17:10:46.335907 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:46.336474 | orchestrator | 2025-05-28 17:10:46.337868 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:10:46.339328 | orchestrator | Wednesday 28 May 2025 17:10:46 +0000 (0:00:00.203) 0:00:54.363 ********* 2025-05-28 17:10:46.527777 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:46.528138 | orchestrator | 2025-05-28 17:10:46.529167 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 17:10:46.530873 | orchestrator | Wednesday 28 May 2025 17:10:46 +0000 (0:00:00.190) 0:00:54.554 ********* 2025-05-28 17:10:46.713696 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:46.713889 | orchestrator | 2025-05-28 17:10:46.714709 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-28 17:10:46.716409 | orchestrator | Wednesday 28 May 2025 17:10:46 +0000 (0:00:00.186) 0:00:54.740 ********* 2025-05-28 17:10:47.062678 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:47.063199 | orchestrator | 2025-05-28 17:10:47.063677 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-28 17:10:47.064359 | orchestrator | Wednesday 28 May 2025 17:10:47 +0000 (0:00:00.348) 0:00:55.089 ********* 2025-05-28 17:10:47.237485 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '91f15584-1a8a-582b-a00a-c533bea87f37'}}) 2025-05-28 17:10:47.238782 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': 
{'osd_lvm_uuid': 'd85522ca-9ab4-5810-aefe-18d74b0f7dbe'}}) 2025-05-28 17:10:47.239805 | orchestrator | 2025-05-28 17:10:47.241247 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-28 17:10:47.241676 | orchestrator | Wednesday 28 May 2025 17:10:47 +0000 (0:00:00.175) 0:00:55.265 ********* 2025-05-28 17:10:49.054606 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-91f15584-1a8a-582b-a00a-c533bea87f37', 'data_vg': 'ceph-91f15584-1a8a-582b-a00a-c533bea87f37'}) 2025-05-28 17:10:49.054843 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d85522ca-9ab4-5810-aefe-18d74b0f7dbe', 'data_vg': 'ceph-d85522ca-9ab4-5810-aefe-18d74b0f7dbe'}) 2025-05-28 17:10:49.054864 | orchestrator | 2025-05-28 17:10:49.054878 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-28 17:10:49.056812 | orchestrator | Wednesday 28 May 2025 17:10:49 +0000 (0:00:01.814) 0:00:57.079 ********* 2025-05-28 17:10:49.202296 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91f15584-1a8a-582b-a00a-c533bea87f37', 'data_vg': 'ceph-91f15584-1a8a-582b-a00a-c533bea87f37'})  2025-05-28 17:10:49.202431 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d85522ca-9ab4-5810-aefe-18d74b0f7dbe', 'data_vg': 'ceph-d85522ca-9ab4-5810-aefe-18d74b0f7dbe'})  2025-05-28 17:10:49.202445 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:49.202456 | orchestrator | 2025-05-28 17:10:49.202467 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-28 17:10:49.202478 | orchestrator | Wednesday 28 May 2025 17:10:49 +0000 (0:00:00.145) 0:00:57.225 ********* 2025-05-28 17:10:50.523544 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-91f15584-1a8a-582b-a00a-c533bea87f37', 'data_vg': 'ceph-91f15584-1a8a-582b-a00a-c533bea87f37'}) 2025-05-28 17:10:50.524399 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d85522ca-9ab4-5810-aefe-18d74b0f7dbe', 'data_vg': 'ceph-d85522ca-9ab4-5810-aefe-18d74b0f7dbe'}) 2025-05-28 17:10:50.525338 | orchestrator | 2025-05-28 17:10:50.526176 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-28 17:10:50.527157 | orchestrator | Wednesday 28 May 2025 17:10:50 +0000 (0:00:01.324) 0:00:58.550 ********* 2025-05-28 17:10:50.680421 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91f15584-1a8a-582b-a00a-c533bea87f37', 'data_vg': 'ceph-91f15584-1a8a-582b-a00a-c533bea87f37'})  2025-05-28 17:10:50.680642 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d85522ca-9ab4-5810-aefe-18d74b0f7dbe', 'data_vg': 'ceph-d85522ca-9ab4-5810-aefe-18d74b0f7dbe'})  2025-05-28 17:10:50.681690 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:50.683268 | orchestrator | 2025-05-28 17:10:50.684973 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-28 17:10:50.685008 | orchestrator | Wednesday 28 May 2025 17:10:50 +0000 (0:00:00.157) 0:00:58.707 ********* 2025-05-28 17:10:50.814687 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:50.815736 | orchestrator | 2025-05-28 17:10:50.816787 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-28 17:10:50.817482 | orchestrator | Wednesday 28 May 2025 17:10:50 +0000 (0:00:00.133) 0:00:58.841 
********* 2025-05-28 17:10:50.949915 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91f15584-1a8a-582b-a00a-c533bea87f37', 'data_vg': 'ceph-91f15584-1a8a-582b-a00a-c533bea87f37'})  2025-05-28 17:10:50.951007 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d85522ca-9ab4-5810-aefe-18d74b0f7dbe', 'data_vg': 'ceph-d85522ca-9ab4-5810-aefe-18d74b0f7dbe'})  2025-05-28 17:10:50.952422 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:50.953503 | orchestrator | 2025-05-28 17:10:50.954700 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-28 17:10:50.955104 | orchestrator | Wednesday 28 May 2025 17:10:50 +0000 (0:00:00.136) 0:00:58.977 ********* 2025-05-28 17:10:51.075031 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:51.075847 | orchestrator | 2025-05-28 17:10:51.077031 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-28 17:10:51.078151 | orchestrator | Wednesday 28 May 2025 17:10:51 +0000 (0:00:00.124) 0:00:59.102 ********* 2025-05-28 17:10:51.212632 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91f15584-1a8a-582b-a00a-c533bea87f37', 'data_vg': 'ceph-91f15584-1a8a-582b-a00a-c533bea87f37'})  2025-05-28 17:10:51.212707 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d85522ca-9ab4-5810-aefe-18d74b0f7dbe', 'data_vg': 'ceph-d85522ca-9ab4-5810-aefe-18d74b0f7dbe'})  2025-05-28 17:10:51.215789 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:51.215809 | orchestrator | 2025-05-28 17:10:51.215819 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-28 17:10:51.216134 | orchestrator | Wednesday 28 May 2025 17:10:51 +0000 (0:00:00.135) 0:00:59.238 ********* 2025-05-28 17:10:51.346281 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:51.347658 | orchestrator | 2025-05-28 17:10:51.348486 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-28 17:10:51.349513 | orchestrator | Wednesday 28 May 2025 17:10:51 +0000 (0:00:00.135) 0:00:59.373 ********* 2025-05-28 17:10:51.501857 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91f15584-1a8a-582b-a00a-c533bea87f37', 'data_vg': 'ceph-91f15584-1a8a-582b-a00a-c533bea87f37'})  2025-05-28 17:10:51.503682 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d85522ca-9ab4-5810-aefe-18d74b0f7dbe', 'data_vg': 'ceph-d85522ca-9ab4-5810-aefe-18d74b0f7dbe'})  2025-05-28 17:10:51.504483 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:51.505763 | orchestrator | 2025-05-28 17:10:51.506535 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-28 17:10:51.507406 | orchestrator | Wednesday 28 May 2025 17:10:51 +0000 (0:00:00.156) 0:00:59.529 ********* 2025-05-28 17:10:51.835595 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:10:51.836492 | orchestrator | 2025-05-28 17:10:51.837682 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-28 17:10:51.838470 | orchestrator | Wednesday 28 May 2025 17:10:51 +0000 (0:00:00.333) 0:00:59.863 ********* 2025-05-28 17:10:51.986297 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91f15584-1a8a-582b-a00a-c533bea87f37', 'data_vg': 'ceph-91f15584-1a8a-582b-a00a-c533bea87f37'})  2025-05-28 17:10:51.987718 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d85522ca-9ab4-5810-aefe-18d74b0f7dbe', 'data_vg': 'ceph-d85522ca-9ab4-5810-aefe-18d74b0f7dbe'})  2025-05-28 17:10:51.989111 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:51.989703 | orchestrator | 2025-05-28 17:10:51.990632 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-28 17:10:51.992240 | orchestrator | Wednesday 28 May 2025 17:10:51 +0000 (0:00:00.149) 0:01:00.012 ********* 2025-05-28 17:10:52.128345 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91f15584-1a8a-582b-a00a-c533bea87f37', 'data_vg': 'ceph-91f15584-1a8a-582b-a00a-c533bea87f37'})  2025-05-28 17:10:52.128545 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d85522ca-9ab4-5810-aefe-18d74b0f7dbe', 'data_vg': 'ceph-d85522ca-9ab4-5810-aefe-18d74b0f7dbe'})  2025-05-28 17:10:52.129673 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:52.130388 | orchestrator | 2025-05-28 17:10:52.132310 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-28 17:10:52.132519 | orchestrator | Wednesday 28 May 2025 17:10:52 +0000 (0:00:00.143) 0:01:00.156 ********* 2025-05-28 17:10:52.269939 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91f15584-1a8a-582b-a00a-c533bea87f37', 'data_vg': 'ceph-91f15584-1a8a-582b-a00a-c533bea87f37'})  2025-05-28 17:10:52.270816 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d85522ca-9ab4-5810-aefe-18d74b0f7dbe', 'data_vg': 'ceph-d85522ca-9ab4-5810-aefe-18d74b0f7dbe'})  2025-05-28 17:10:52.272243 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:52.273586 | orchestrator | 2025-05-28 17:10:52.273731 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-28 17:10:52.274382 | orchestrator | Wednesday 28 May 2025 17:10:52 +0000 (0:00:00.141) 0:01:00.297 ********* 2025-05-28 17:10:52.409103 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:52.409244 | orchestrator | 2025-05-28 17:10:52.409510 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-28 17:10:52.410180 | orchestrator | Wednesday 28 May 2025 17:10:52 +0000 (0:00:00.138) 0:01:00.436 ********* 2025-05-28 17:10:52.536086 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:52.536156 | orchestrator | 2025-05-28 17:10:52.536653 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-28 17:10:52.536816 | orchestrator | Wednesday 28 May 2025 17:10:52 +0000 (0:00:00.127) 0:01:00.564 ********* 2025-05-28 17:10:52.657409 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:52.657701 | orchestrator | 2025-05-28 17:10:52.659115 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-28 17:10:52.662892 | orchestrator | Wednesday 28 May 2025 17:10:52 +0000 (0:00:00.120) 0:01:00.684 ********* 2025-05-28 17:10:52.793261 | orchestrator | ok: [testbed-node-5] => { 2025-05-28 17:10:52.793363 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-28 17:10:52.794150 | orchestrator | } 2025-05-28 17:10:52.795026 | orchestrator | 2025-05-28 17:10:52.797292 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-28 17:10:52.797311 | orchestrator | Wednesday 28 May 2025 17:10:52 +0000 
(0:00:00.136) 0:01:00.820 ********* 2025-05-28 17:10:52.936592 | orchestrator | ok: [testbed-node-5] => { 2025-05-28 17:10:52.937702 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-28 17:10:52.938626 | orchestrator | } 2025-05-28 17:10:52.939507 | orchestrator | 2025-05-28 17:10:52.940426 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-28 17:10:52.941150 | orchestrator | Wednesday 28 May 2025 17:10:52 +0000 (0:00:00.143) 0:01:00.964 ********* 2025-05-28 17:10:53.062229 | orchestrator | ok: [testbed-node-5] => { 2025-05-28 17:10:53.062936 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-28 17:10:53.064394 | orchestrator | } 2025-05-28 17:10:53.065411 | orchestrator | 2025-05-28 17:10:53.068363 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-28 17:10:53.069028 | orchestrator | Wednesday 28 May 2025 17:10:53 +0000 (0:00:00.124) 0:01:01.089 ********* 2025-05-28 17:10:53.559854 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:10:53.560573 | orchestrator | 2025-05-28 17:10:53.562972 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-28 17:10:53.563019 | orchestrator | Wednesday 28 May 2025 17:10:53 +0000 (0:00:00.497) 0:01:01.586 ********* 2025-05-28 17:10:54.077499 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:10:54.078249 | orchestrator | 2025-05-28 17:10:54.079186 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-28 17:10:54.079730 | orchestrator | Wednesday 28 May 2025 17:10:54 +0000 (0:00:00.516) 0:01:02.103 ********* 2025-05-28 17:10:54.770432 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:10:54.770544 | orchestrator | 2025-05-28 17:10:54.772795 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-28 17:10:54.773418 | orchestrator | Wednesday 28 May 2025 17:10:54 +0000 (0:00:00.692) 0:01:02.796 ********* 2025-05-28 17:10:54.914720 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:10:54.915338 | orchestrator | 2025-05-28 17:10:54.916423 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-28 17:10:54.917160 | orchestrator | Wednesday 28 May 2025 17:10:54 +0000 (0:00:00.145) 0:01:02.942 ********* 2025-05-28 17:10:55.032774 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:55.033724 | orchestrator | 2025-05-28 17:10:55.034652 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-28 17:10:55.035660 | orchestrator | Wednesday 28 May 2025 17:10:55 +0000 (0:00:00.117) 0:01:03.059 ********* 2025-05-28 17:10:55.135571 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:55.135937 | orchestrator | 2025-05-28 17:10:55.137226 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-28 17:10:55.138114 | orchestrator | Wednesday 28 May 2025 17:10:55 +0000 (0:00:00.103) 0:01:03.163 ********* 2025-05-28 17:10:55.266521 | orchestrator | ok: [testbed-node-5] => { 2025-05-28 17:10:55.267586 | orchestrator |  "vgs_report": { 2025-05-28 17:10:55.268776 | orchestrator |  "vg": [] 2025-05-28 17:10:55.269952 | orchestrator |  } 2025-05-28 17:10:55.271099 | orchestrator | } 2025-05-28 17:10:55.271711 | orchestrator | 2025-05-28 17:10:55.272543 | orchestrator | TASK [Print LVM VG sizes] 
****************************************************** 2025-05-28 17:10:55.273468 | orchestrator | Wednesday 28 May 2025 17:10:55 +0000 (0:00:00.129) 0:01:03.292 ********* 2025-05-28 17:10:55.384494 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:55.385192 | orchestrator | 2025-05-28 17:10:55.386110 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-28 17:10:55.386359 | orchestrator | Wednesday 28 May 2025 17:10:55 +0000 (0:00:00.119) 0:01:03.412 ********* 2025-05-28 17:10:55.518710 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:55.519586 | orchestrator | 2025-05-28 17:10:55.519619 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-28 17:10:55.520614 | orchestrator | Wednesday 28 May 2025 17:10:55 +0000 (0:00:00.131) 0:01:03.543 ********* 2025-05-28 17:10:55.641193 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:55.641939 | orchestrator | 2025-05-28 17:10:55.642990 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-28 17:10:55.644458 | orchestrator | Wednesday 28 May 2025 17:10:55 +0000 (0:00:00.124) 0:01:03.668 ********* 2025-05-28 17:10:55.770108 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:55.770224 | orchestrator | 2025-05-28 17:10:55.770240 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-28 17:10:55.770253 | orchestrator | Wednesday 28 May 2025 17:10:55 +0000 (0:00:00.129) 0:01:03.797 ********* 2025-05-28 17:10:55.905564 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:55.905698 | orchestrator | 2025-05-28 17:10:55.905819 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-28 17:10:55.907428 | orchestrator | Wednesday 28 May 2025 17:10:55 +0000 (0:00:00.136) 0:01:03.933 ********* 2025-05-28 17:10:56.038747 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:56.038852 | orchestrator | 2025-05-28 17:10:56.039749 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-28 17:10:56.041632 | orchestrator | Wednesday 28 May 2025 17:10:56 +0000 (0:00:00.131) 0:01:04.065 ********* 2025-05-28 17:10:56.175602 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:56.175738 | orchestrator | 2025-05-28 17:10:56.178357 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-28 17:10:56.178490 | orchestrator | Wednesday 28 May 2025 17:10:56 +0000 (0:00:00.136) 0:01:04.201 ********* 2025-05-28 17:10:56.304808 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:56.305714 | orchestrator | 2025-05-28 17:10:56.305746 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-28 17:10:56.306669 | orchestrator | Wednesday 28 May 2025 17:10:56 +0000 (0:00:00.130) 0:01:04.331 ********* 2025-05-28 17:10:56.625099 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:56.625914 | orchestrator | 2025-05-28 17:10:56.625943 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-28 17:10:56.626541 | orchestrator | Wednesday 28 May 2025 17:10:56 +0000 (0:00:00.320) 0:01:04.652 ********* 2025-05-28 17:10:56.765484 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:56.766668 | orchestrator | 2025-05-28 17:10:56.767322 | orchestrator | TASK 
[Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-28 17:10:56.768509 | orchestrator | Wednesday 28 May 2025 17:10:56 +0000 (0:00:00.140) 0:01:04.793 ********* 2025-05-28 17:10:56.892733 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:56.892855 | orchestrator | 2025-05-28 17:10:56.893755 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-28 17:10:56.894584 | orchestrator | Wednesday 28 May 2025 17:10:56 +0000 (0:00:00.127) 0:01:04.920 ********* 2025-05-28 17:10:57.010157 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:57.010406 | orchestrator | 2025-05-28 17:10:57.010953 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-28 17:10:57.012049 | orchestrator | Wednesday 28 May 2025 17:10:57 +0000 (0:00:00.116) 0:01:05.037 ********* 2025-05-28 17:10:57.136467 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:57.137277 | orchestrator | 2025-05-28 17:10:57.137949 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-28 17:10:57.138619 | orchestrator | Wednesday 28 May 2025 17:10:57 +0000 (0:00:00.126) 0:01:05.163 ********* 2025-05-28 17:10:57.263680 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:57.264804 | orchestrator | 2025-05-28 17:10:57.267506 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-28 17:10:57.267771 | orchestrator | Wednesday 28 May 2025 17:10:57 +0000 (0:00:00.125) 0:01:05.289 ********* 2025-05-28 17:10:57.414434 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91f15584-1a8a-582b-a00a-c533bea87f37', 'data_vg': 'ceph-91f15584-1a8a-582b-a00a-c533bea87f37'})  2025-05-28 17:10:57.415320 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d85522ca-9ab4-5810-aefe-18d74b0f7dbe', 'data_vg': 'ceph-d85522ca-9ab4-5810-aefe-18d74b0f7dbe'})  2025-05-28 17:10:57.415998 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:57.417594 | orchestrator | 2025-05-28 17:10:57.418603 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-28 17:10:57.419692 | orchestrator | Wednesday 28 May 2025 17:10:57 +0000 (0:00:00.152) 0:01:05.442 ********* 2025-05-28 17:10:57.556686 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91f15584-1a8a-582b-a00a-c533bea87f37', 'data_vg': 'ceph-91f15584-1a8a-582b-a00a-c533bea87f37'})  2025-05-28 17:10:57.556873 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d85522ca-9ab4-5810-aefe-18d74b0f7dbe', 'data_vg': 'ceph-d85522ca-9ab4-5810-aefe-18d74b0f7dbe'})  2025-05-28 17:10:57.557628 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:57.557961 | orchestrator | 2025-05-28 17:10:57.559020 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-28 17:10:57.559818 | orchestrator | Wednesday 28 May 2025 17:10:57 +0000 (0:00:00.141) 0:01:05.583 ********* 2025-05-28 17:10:57.701580 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91f15584-1a8a-582b-a00a-c533bea87f37', 'data_vg': 'ceph-91f15584-1a8a-582b-a00a-c533bea87f37'})  2025-05-28 17:10:57.701729 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d85522ca-9ab4-5810-aefe-18d74b0f7dbe', 'data_vg': 'ceph-d85522ca-9ab4-5810-aefe-18d74b0f7dbe'})  2025-05-28 17:10:57.701830 | 
orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:57.702833 | orchestrator | 2025-05-28 17:10:57.703649 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-28 17:10:57.705494 | orchestrator | Wednesday 28 May 2025 17:10:57 +0000 (0:00:00.144) 0:01:05.728 ********* 2025-05-28 17:10:57.848438 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91f15584-1a8a-582b-a00a-c533bea87f37', 'data_vg': 'ceph-91f15584-1a8a-582b-a00a-c533bea87f37'})  2025-05-28 17:10:57.848544 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d85522ca-9ab4-5810-aefe-18d74b0f7dbe', 'data_vg': 'ceph-d85522ca-9ab4-5810-aefe-18d74b0f7dbe'})  2025-05-28 17:10:57.849674 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:57.851593 | orchestrator | 2025-05-28 17:10:57.851619 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-28 17:10:57.851697 | orchestrator | Wednesday 28 May 2025 17:10:57 +0000 (0:00:00.147) 0:01:05.875 ********* 2025-05-28 17:10:57.995769 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91f15584-1a8a-582b-a00a-c533bea87f37', 'data_vg': 'ceph-91f15584-1a8a-582b-a00a-c533bea87f37'})  2025-05-28 17:10:57.996371 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d85522ca-9ab4-5810-aefe-18d74b0f7dbe', 'data_vg': 'ceph-d85522ca-9ab4-5810-aefe-18d74b0f7dbe'})  2025-05-28 17:10:57.997640 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:57.998666 | orchestrator | 2025-05-28 17:10:57.999860 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-28 17:10:58.000954 | orchestrator | Wednesday 28 May 2025 17:10:57 +0000 (0:00:00.147) 0:01:06.022 ********* 2025-05-28 17:10:58.132522 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91f15584-1a8a-582b-a00a-c533bea87f37', 'data_vg': 'ceph-91f15584-1a8a-582b-a00a-c533bea87f37'})  2025-05-28 17:10:58.132761 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d85522ca-9ab4-5810-aefe-18d74b0f7dbe', 'data_vg': 'ceph-d85522ca-9ab4-5810-aefe-18d74b0f7dbe'})  2025-05-28 17:10:58.133649 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:58.134396 | orchestrator | 2025-05-28 17:10:58.136241 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-28 17:10:58.136908 | orchestrator | Wednesday 28 May 2025 17:10:58 +0000 (0:00:00.136) 0:01:06.159 ********* 2025-05-28 17:10:58.468340 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91f15584-1a8a-582b-a00a-c533bea87f37', 'data_vg': 'ceph-91f15584-1a8a-582b-a00a-c533bea87f37'})  2025-05-28 17:10:58.468641 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d85522ca-9ab4-5810-aefe-18d74b0f7dbe', 'data_vg': 'ceph-d85522ca-9ab4-5810-aefe-18d74b0f7dbe'})  2025-05-28 17:10:58.471179 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:58.471835 | orchestrator | 2025-05-28 17:10:58.472352 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-28 17:10:58.473418 | orchestrator | Wednesday 28 May 2025 17:10:58 +0000 (0:00:00.333) 0:01:06.493 ********* 2025-05-28 17:10:58.612467 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91f15584-1a8a-582b-a00a-c533bea87f37', 'data_vg': 'ceph-91f15584-1a8a-582b-a00a-c533bea87f37'})  2025-05-28 
17:10:58.613390 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d85522ca-9ab4-5810-aefe-18d74b0f7dbe', 'data_vg': 'ceph-d85522ca-9ab4-5810-aefe-18d74b0f7dbe'})  2025-05-28 17:10:58.614673 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:10:58.616004 | orchestrator | 2025-05-28 17:10:58.616746 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-28 17:10:58.617261 | orchestrator | Wednesday 28 May 2025 17:10:58 +0000 (0:00:00.146) 0:01:06.639 ********* 2025-05-28 17:10:59.138770 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:10:59.138953 | orchestrator | 2025-05-28 17:10:59.138969 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-28 17:10:59.139599 | orchestrator | Wednesday 28 May 2025 17:10:59 +0000 (0:00:00.523) 0:01:07.163 ********* 2025-05-28 17:10:59.646522 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:10:59.648322 | orchestrator | 2025-05-28 17:10:59.648355 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-28 17:10:59.649010 | orchestrator | Wednesday 28 May 2025 17:10:59 +0000 (0:00:00.508) 0:01:07.672 ********* 2025-05-28 17:10:59.784283 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:10:59.785529 | orchestrator | 2025-05-28 17:10:59.786518 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-28 17:10:59.787457 | orchestrator | Wednesday 28 May 2025 17:10:59 +0000 (0:00:00.138) 0:01:07.811 ********* 2025-05-28 17:10:59.955624 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-91f15584-1a8a-582b-a00a-c533bea87f37', 'vg_name': 'ceph-91f15584-1a8a-582b-a00a-c533bea87f37'}) 2025-05-28 17:10:59.956263 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-d85522ca-9ab4-5810-aefe-18d74b0f7dbe', 'vg_name': 'ceph-d85522ca-9ab4-5810-aefe-18d74b0f7dbe'}) 2025-05-28 17:10:59.956988 | orchestrator | 2025-05-28 17:10:59.957825 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-28 17:10:59.959615 | orchestrator | Wednesday 28 May 2025 17:10:59 +0000 (0:00:00.172) 0:01:07.983 ********* 2025-05-28 17:11:00.101841 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91f15584-1a8a-582b-a00a-c533bea87f37', 'data_vg': 'ceph-91f15584-1a8a-582b-a00a-c533bea87f37'})  2025-05-28 17:11:00.102314 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d85522ca-9ab4-5810-aefe-18d74b0f7dbe', 'data_vg': 'ceph-d85522ca-9ab4-5810-aefe-18d74b0f7dbe'})  2025-05-28 17:11:00.103501 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:11:00.104527 | orchestrator | 2025-05-28 17:11:00.105319 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-28 17:11:00.106351 | orchestrator | Wednesday 28 May 2025 17:11:00 +0000 (0:00:00.144) 0:01:08.128 ********* 2025-05-28 17:11:00.240118 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91f15584-1a8a-582b-a00a-c533bea87f37', 'data_vg': 'ceph-91f15584-1a8a-582b-a00a-c533bea87f37'})  2025-05-28 17:11:00.240959 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d85522ca-9ab4-5810-aefe-18d74b0f7dbe', 'data_vg': 'ceph-d85522ca-9ab4-5810-aefe-18d74b0f7dbe'})  2025-05-28 17:11:00.242296 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:11:00.243151 | orchestrator | 2025-05-28 17:11:00.243468 
| orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-28 17:11:00.244166 | orchestrator | Wednesday 28 May 2025 17:11:00 +0000 (0:00:00.139) 0:01:08.267 ********* 2025-05-28 17:11:00.406611 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-91f15584-1a8a-582b-a00a-c533bea87f37', 'data_vg': 'ceph-91f15584-1a8a-582b-a00a-c533bea87f37'})  2025-05-28 17:11:00.408187 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d85522ca-9ab4-5810-aefe-18d74b0f7dbe', 'data_vg': 'ceph-d85522ca-9ab4-5810-aefe-18d74b0f7dbe'})  2025-05-28 17:11:00.408882 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:11:00.409330 | orchestrator | 2025-05-28 17:11:00.409890 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-28 17:11:00.410268 | orchestrator | Wednesday 28 May 2025 17:11:00 +0000 (0:00:00.163) 0:01:08.431 ********* 2025-05-28 17:11:00.549596 | orchestrator | ok: [testbed-node-5] => { 2025-05-28 17:11:00.550367 | orchestrator |  "lvm_report": { 2025-05-28 17:11:00.550897 | orchestrator |  "lv": [ 2025-05-28 17:11:00.551666 | orchestrator |  { 2025-05-28 17:11:00.551992 | orchestrator |  "lv_name": "osd-block-91f15584-1a8a-582b-a00a-c533bea87f37", 2025-05-28 17:11:00.552941 | orchestrator |  "vg_name": "ceph-91f15584-1a8a-582b-a00a-c533bea87f37" 2025-05-28 17:11:00.553780 | orchestrator |  }, 2025-05-28 17:11:00.554321 | orchestrator |  { 2025-05-28 17:11:00.554800 | orchestrator |  "lv_name": "osd-block-d85522ca-9ab4-5810-aefe-18d74b0f7dbe", 2025-05-28 17:11:00.555789 | orchestrator |  "vg_name": "ceph-d85522ca-9ab4-5810-aefe-18d74b0f7dbe" 2025-05-28 17:11:00.556315 | orchestrator |  } 2025-05-28 17:11:00.557353 | orchestrator |  ], 2025-05-28 17:11:00.557610 | orchestrator |  "pv": [ 2025-05-28 17:11:00.558184 | orchestrator |  { 2025-05-28 17:11:00.558806 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-28 17:11:00.559535 | orchestrator |  "vg_name": "ceph-91f15584-1a8a-582b-a00a-c533bea87f37" 2025-05-28 17:11:00.560443 | orchestrator |  }, 2025-05-28 17:11:00.560633 | orchestrator |  { 2025-05-28 17:11:00.561656 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-28 17:11:00.562193 | orchestrator |  "vg_name": "ceph-d85522ca-9ab4-5810-aefe-18d74b0f7dbe" 2025-05-28 17:11:00.562656 | orchestrator |  } 2025-05-28 17:11:00.563357 | orchestrator |  ] 2025-05-28 17:11:00.564192 | orchestrator |  } 2025-05-28 17:11:00.564294 | orchestrator | } 2025-05-28 17:11:00.565649 | orchestrator | 2025-05-28 17:11:00.566659 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:11:00.566705 | orchestrator | 2025-05-28 17:11:00 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-28 17:11:00.566721 | orchestrator | 2025-05-28 17:11:00 | INFO  | Please wait and do not abort execution. 
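The lvm_report just printed is assembled from two lvm2 JSON reports ('Get list of Ceph LVs/PVs with associated VGs') combined into a single dict. A rough manual equivalent on a node, assuming stock lvm2 tooling with --reportformat support; the play's actual tasks may use Ansible modules instead:

    # Ceph LVs and PVs with their VGs, as JSON
    lvs --reportformat json -o lv_name,vg_name
    pvs --reportformat json -o pv_name,vg_name

    # VG totals/free space in bytes, as gathered for the DB/WAL sizing checks
    vgs --reportformat json -o vg_name,vg_size,vg_free --units b

    # 'Create block VGs' / 'Create block LVs' roughly amount to one VG plus one
    # LV per OSD device, e.g. for the sdb entry above (UUIDs taken from the log):
    vgcreate ceph-91f15584-1a8a-582b-a00a-c533bea87f37 /dev/sdb
    lvcreate -n osd-block-91f15584-1a8a-582b-a00a-c533bea87f37 -l 100%FREE \
        ceph-91f15584-1a8a-582b-a00a-c533bea87f37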
2025-05-28 17:11:00.567368 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-28 17:11:00.568008 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-28 17:11:00.568750 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-28 17:11:00.569444 | orchestrator | 2025-05-28 17:11:00.570001 | orchestrator | 2025-05-28 17:11:00.570633 | orchestrator | 2025-05-28 17:11:00.571333 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:11:00.572219 | orchestrator | Wednesday 28 May 2025 17:11:00 +0000 (0:00:00.146) 0:01:08.577 ********* 2025-05-28 17:11:00.572694 | orchestrator | =============================================================================== 2025-05-28 17:11:00.573093 | orchestrator | Create block VGs -------------------------------------------------------- 5.70s 2025-05-28 17:11:00.573884 | orchestrator | Create block LVs -------------------------------------------------------- 4.04s 2025-05-28 17:11:00.574666 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.81s 2025-05-28 17:11:00.575494 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.77s 2025-05-28 17:11:00.576067 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.58s 2025-05-28 17:11:00.576667 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.53s 2025-05-28 17:11:00.577181 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.51s 2025-05-28 17:11:00.577804 | orchestrator | Add known partitions to the list of available block devices ------------- 1.38s 2025-05-28 17:11:00.578443 | orchestrator | Add known links to the list of available block devices ------------------ 1.14s 2025-05-28 17:11:00.579245 | orchestrator | Add known partitions to the list of available block devices ------------- 0.99s 2025-05-28 17:11:00.580526 | orchestrator | Print LVM report data --------------------------------------------------- 0.89s 2025-05-28 17:11:00.580547 | orchestrator | Add known links to the list of available block devices ------------------ 0.80s 2025-05-28 17:11:00.580611 | orchestrator | Add known partitions to the list of available block devices ------------- 0.80s 2025-05-28 17:11:00.580951 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.69s 2025-05-28 17:11:00.581743 | orchestrator | Get initial list of available block devices ----------------------------- 0.65s 2025-05-28 17:11:00.581949 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.64s 2025-05-28 17:11:00.582496 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.63s 2025-05-28 17:11:00.582599 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s 2025-05-28 17:11:00.583415 | orchestrator | Check whether ceph_db_wal_devices is used exclusively ------------------- 0.62s 2025-05-28 17:11:00.583568 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.62s 2025-05-28 17:11:02.849500 | orchestrator | Registering Redlock._acquired_script 2025-05-28 17:11:02.849638 | orchestrator | Registering Redlock._extend_script 2025-05-28 
17:11:02.849654 | orchestrator | Registering Redlock._release_script 2025-05-28 17:11:02.909536 | orchestrator | 2025-05-28 17:11:02 | INFO  | Task 0910bc16-4e79-4a13-87ea-1a5ac4d10f10 (facts) was prepared for execution. 2025-05-28 17:11:02.909610 | orchestrator | 2025-05-28 17:11:02 | INFO  | It takes a moment until task 0910bc16-4e79-4a13-87ea-1a5ac4d10f10 (facts) has been started and output is visible here. 2025-05-28 17:11:06.937454 | orchestrator | 2025-05-28 17:11:06.941350 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-05-28 17:11:06.941764 | orchestrator | 2025-05-28 17:11:06.941790 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-28 17:11:06.941802 | orchestrator | Wednesday 28 May 2025 17:11:06 +0000 (0:00:00.273) 0:00:00.273 ********* 2025-05-28 17:11:08.030294 | orchestrator | ok: [testbed-manager] 2025-05-28 17:11:08.033458 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:11:08.033491 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:11:08.033503 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:11:08.035826 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:11:08.035858 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:11:08.038156 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:11:08.038873 | orchestrator | 2025-05-28 17:11:08.039940 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-28 17:11:08.040554 | orchestrator | Wednesday 28 May 2025 17:11:08 +0000 (0:00:01.090) 0:00:01.363 ********* 2025-05-28 17:11:08.196288 | orchestrator | skipping: [testbed-manager] 2025-05-28 17:11:08.276492 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:11:08.355546 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:11:08.433789 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:11:08.510238 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:11:09.235964 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:11:09.237063 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:11:09.239184 | orchestrator | 2025-05-28 17:11:09.239333 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-28 17:11:09.239807 | orchestrator | 2025-05-28 17:11:09.240414 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-28 17:11:09.241246 | orchestrator | Wednesday 28 May 2025 17:11:09 +0000 (0:00:01.209) 0:00:02.573 ********* 2025-05-28 17:11:15.222280 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:11:15.222744 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:11:15.223548 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:11:15.224338 | orchestrator | ok: [testbed-manager] 2025-05-28 17:11:15.226105 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:11:15.226330 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:11:15.227315 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:11:15.228400 | orchestrator | 2025-05-28 17:11:15.228509 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-28 17:11:15.229555 | orchestrator | 2025-05-28 17:11:15.229664 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-28 17:11:15.230404 | orchestrator | Wednesday 28 May 2025 17:11:15 +0000 (0:00:05.987) 0:00:08.560 ********* 2025-05-28 17:11:15.381638 | orchestrator | skipping: [testbed-manager] 
2025-05-28 17:11:15.452983 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:11:15.534285 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:11:15.608539 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:11:15.685071 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:11:15.732829 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:11:15.733581 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:11:15.733949 | orchestrator | 2025-05-28 17:11:15.734735 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:11:15.735254 | orchestrator | 2025-05-28 17:11:15 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-28 17:11:15.735448 | orchestrator | 2025-05-28 17:11:15 | INFO  | Please wait and do not abort execution. 2025-05-28 17:11:15.736359 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:11:15.737293 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:11:15.737982 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:11:15.738538 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:11:15.739309 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:11:15.739975 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:11:15.740231 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:11:15.740611 | orchestrator | 2025-05-28 17:11:15.741039 | orchestrator | 2025-05-28 17:11:15.741422 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:11:15.741812 | orchestrator | Wednesday 28 May 2025 17:11:15 +0000 (0:00:00.510) 0:00:09.071 ********* 2025-05-28 17:11:15.742253 | orchestrator | =============================================================================== 2025-05-28 17:11:15.743039 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.99s 2025-05-28 17:11:15.743250 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.21s 2025-05-28 17:11:15.743965 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.09s 2025-05-28 17:11:15.744438 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s 2025-05-28 17:11:16.356643 | orchestrator | 2025-05-28 17:11:16.357951 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Wed May 28 17:11:16 UTC 2025 2025-05-28 17:11:16.357980 | orchestrator | 2025-05-28 17:11:18.026689 | orchestrator | 2025-05-28 17:11:18 | INFO  | Collection nutshell is prepared for execution 2025-05-28 17:11:18.028347 | orchestrator | 2025-05-28 17:11:18 | INFO  | D [0] - dotfiles 2025-05-28 17:11:18.031658 | orchestrator | Registering Redlock._acquired_script 2025-05-28 17:11:18.031707 | orchestrator | Registering Redlock._extend_script 2025-05-28 17:11:18.031719 | orchestrator | Registering Redlock._release_script 2025-05-28 17:11:18.037096 | orchestrator | 2025-05-28 17:11:18 | INFO  | D [0] - homer 2025-05-28 17:11:18.037126 | orchestrator | 2025-05-28 17:11:18 | INFO  | D [0] - 
netdata 2025-05-28 17:11:18.037138 | orchestrator | 2025-05-28 17:11:18 | INFO  | D [0] - openstackclient 2025-05-28 17:11:18.037150 | orchestrator | 2025-05-28 17:11:18 | INFO  | D [0] - phpmyadmin 2025-05-28 17:11:18.037161 | orchestrator | 2025-05-28 17:11:18 | INFO  | A [0] - common 2025-05-28 17:11:18.038692 | orchestrator | 2025-05-28 17:11:18 | INFO  | A [1] -- loadbalancer 2025-05-28 17:11:18.038820 | orchestrator | 2025-05-28 17:11:18 | INFO  | D [2] --- opensearch 2025-05-28 17:11:18.038837 | orchestrator | 2025-05-28 17:11:18 | INFO  | A [2] --- mariadb-ng 2025-05-28 17:11:18.038916 | orchestrator | 2025-05-28 17:11:18 | INFO  | D [3] ---- horizon 2025-05-28 17:11:18.038934 | orchestrator | 2025-05-28 17:11:18 | INFO  | A [3] ---- keystone 2025-05-28 17:11:18.039140 | orchestrator | 2025-05-28 17:11:18 | INFO  | A [4] ----- neutron 2025-05-28 17:11:18.039162 | orchestrator | 2025-05-28 17:11:18 | INFO  | D [5] ------ wait-for-nova 2025-05-28 17:11:18.039175 | orchestrator | 2025-05-28 17:11:18 | INFO  | A [5] ------ octavia 2025-05-28 17:11:18.040074 | orchestrator | 2025-05-28 17:11:18 | INFO  | D [4] ----- barbican 2025-05-28 17:11:18.040100 | orchestrator | 2025-05-28 17:11:18 | INFO  | D [4] ----- designate 2025-05-28 17:11:18.040114 | orchestrator | 2025-05-28 17:11:18 | INFO  | D [4] ----- ironic 2025-05-28 17:11:18.040128 | orchestrator | 2025-05-28 17:11:18 | INFO  | D [4] ----- placement 2025-05-28 17:11:18.040138 | orchestrator | 2025-05-28 17:11:18 | INFO  | D [4] ----- magnum 2025-05-28 17:11:18.040334 | orchestrator | 2025-05-28 17:11:18 | INFO  | A [1] -- openvswitch 2025-05-28 17:11:18.040356 | orchestrator | 2025-05-28 17:11:18 | INFO  | D [2] --- ovn 2025-05-28 17:11:18.040726 | orchestrator | 2025-05-28 17:11:18 | INFO  | D [1] -- memcached 2025-05-28 17:11:18.040747 | orchestrator | 2025-05-28 17:11:18 | INFO  | D [1] -- redis 2025-05-28 17:11:18.040759 | orchestrator | 2025-05-28 17:11:18 | INFO  | D [1] -- rabbitmq-ng 2025-05-28 17:11:18.040974 | orchestrator | 2025-05-28 17:11:18 | INFO  | A [0] - kubernetes 2025-05-28 17:11:18.042471 | orchestrator | 2025-05-28 17:11:18 | INFO  | D [1] -- kubeconfig 2025-05-28 17:11:18.042707 | orchestrator | 2025-05-28 17:11:18 | INFO  | A [1] -- copy-kubeconfig 2025-05-28 17:11:18.042728 | orchestrator | 2025-05-28 17:11:18 | INFO  | A [0] - ceph 2025-05-28 17:11:18.044491 | orchestrator | 2025-05-28 17:11:18 | INFO  | A [1] -- ceph-pools 2025-05-28 17:11:18.044516 | orchestrator | 2025-05-28 17:11:18 | INFO  | A [2] --- copy-ceph-keys 2025-05-28 17:11:18.044607 | orchestrator | 2025-05-28 17:11:18 | INFO  | A [3] ---- cephclient 2025-05-28 17:11:18.044685 | orchestrator | 2025-05-28 17:11:18 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-05-28 17:11:18.044702 | orchestrator | 2025-05-28 17:11:18 | INFO  | A [4] ----- wait-for-keystone 2025-05-28 17:11:18.044961 | orchestrator | 2025-05-28 17:11:18 | INFO  | D [5] ------ kolla-ceph-rgw 2025-05-28 17:11:18.045011 | orchestrator | 2025-05-28 17:11:18 | INFO  | D [5] ------ glance 2025-05-28 17:11:18.045104 | orchestrator | 2025-05-28 17:11:18 | INFO  | D [5] ------ cinder 2025-05-28 17:11:18.045121 | orchestrator | 2025-05-28 17:11:18 | INFO  | D [5] ------ nova 2025-05-28 17:11:18.045532 | orchestrator | 2025-05-28 17:11:18 | INFO  | A [4] ----- prometheus 2025-05-28 17:11:18.045553 | orchestrator | 2025-05-28 17:11:18 | INFO  | D [5] ------ grafana 2025-05-28 17:11:18.222356 | orchestrator | 2025-05-28 17:11:18 | INFO  | All tasks of the collection nutshell are 
prepared for execution 2025-05-28 17:11:18.222472 | orchestrator | 2025-05-28 17:11:18 | INFO  | Tasks are running in the background 2025-05-28 17:11:20.760821 | orchestrator | 2025-05-28 17:11:20 | INFO  | No task IDs specified, wait for all currently running tasks 2025-05-28 17:11:22.864048 | orchestrator | 2025-05-28 17:11:22 | INFO  | Task d6b65c06-0eb8-46cb-90f5-3ab920eed557 is in state STARTED 2025-05-28 17:11:22.864717 | orchestrator | 2025-05-28 17:11:22 | INFO  | Task cedce0cb-b944-4c43-88e2-4221b658cbc5 is in state STARTED 2025-05-28 17:11:22.866869 | orchestrator | 2025-05-28 17:11:22 | INFO  | Task 9dbb75ed-1f68-41be-afd3-4273c9c8cdb8 is in state STARTED 2025-05-28 17:11:22.867390 | orchestrator | 2025-05-28 17:11:22 | INFO  | Task 5782cb1b-6104-4996-b9ae-5fe82aaa6314 is in state STARTED 2025-05-28 17:11:22.870509 | orchestrator | 2025-05-28 17:11:22 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:11:22.871040 | orchestrator | 2025-05-28 17:11:22 | INFO  | Task 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 is in state STARTED 2025-05-28 17:11:22.878936 | orchestrator | 2025-05-28 17:11:22 | INFO  | Task 1273927c-989e-4a38-8d59-43fa1848ade1 is in state STARTED 2025-05-28 17:11:22.878964 | orchestrator | 2025-05-28 17:11:22 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:11:25.915416 | orchestrator | 2025-05-28 17:11:25 | INFO  | Task d6b65c06-0eb8-46cb-90f5-3ab920eed557 is in state STARTED 2025-05-28 17:11:25.915657 | orchestrator | 2025-05-28 17:11:25 | INFO  | Task cedce0cb-b944-4c43-88e2-4221b658cbc5 is in state STARTED 2025-05-28 17:11:25.916094 | orchestrator | 2025-05-28 17:11:25 | INFO  | Task 9dbb75ed-1f68-41be-afd3-4273c9c8cdb8 is in state STARTED 2025-05-28 17:11:25.916704 | orchestrator | 2025-05-28 17:11:25 | INFO  | Task 5782cb1b-6104-4996-b9ae-5fe82aaa6314 is in state STARTED 2025-05-28 17:11:25.917465 | orchestrator | 2025-05-28 17:11:25 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:11:25.917900 | orchestrator | 2025-05-28 17:11:25 | INFO  | Task 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 is in state STARTED 2025-05-28 17:11:25.918961 | orchestrator | 2025-05-28 17:11:25 | INFO  | Task 1273927c-989e-4a38-8d59-43fa1848ade1 is in state STARTED 2025-05-28 17:11:25.919108 | orchestrator | 2025-05-28 17:11:25 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:11:28.999821 | orchestrator | 2025-05-28 17:11:28 | INFO  | Task d6b65c06-0eb8-46cb-90f5-3ab920eed557 is in state STARTED 2025-05-28 17:11:28.999937 | orchestrator | 2025-05-28 17:11:28 | INFO  | Task cedce0cb-b944-4c43-88e2-4221b658cbc5 is in state STARTED 2025-05-28 17:11:28.999953 | orchestrator | 2025-05-28 17:11:28 | INFO  | Task 9dbb75ed-1f68-41be-afd3-4273c9c8cdb8 is in state STARTED 2025-05-28 17:11:29.000021 | orchestrator | 2025-05-28 17:11:28 | INFO  | Task 5782cb1b-6104-4996-b9ae-5fe82aaa6314 is in state STARTED 2025-05-28 17:11:29.000034 | orchestrator | 2025-05-28 17:11:28 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:11:29.000045 | orchestrator | 2025-05-28 17:11:28 | INFO  | Task 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 is in state STARTED 2025-05-28 17:11:29.001637 | orchestrator | 2025-05-28 17:11:29 | INFO  | Task 1273927c-989e-4a38-8d59-43fa1848ade1 is in state STARTED 2025-05-28 17:11:29.001666 | orchestrator | 2025-05-28 17:11:29 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:11:32.066515 | orchestrator | 2025-05-28 17:11:32 | INFO  | Task 
d6b65c06-0eb8-46cb-90f5-3ab920eed557 is in state STARTED 2025-05-28 17:11:32.068502 | orchestrator | 2025-05-28 17:11:32 | INFO  | Task cedce0cb-b944-4c43-88e2-4221b658cbc5 is in state STARTED 2025-05-28 17:11:32.069229 | orchestrator | 2025-05-28 17:11:32 | INFO  | Task 9dbb75ed-1f68-41be-afd3-4273c9c8cdb8 is in state STARTED 2025-05-28 17:11:32.069820 | orchestrator | 2025-05-28 17:11:32 | INFO  | Task 5782cb1b-6104-4996-b9ae-5fe82aaa6314 is in state STARTED 2025-05-28 17:11:32.077339 | orchestrator | 2025-05-28 17:11:32 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:11:32.078507 | orchestrator | 2025-05-28 17:11:32 | INFO  | Task 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 is in state STARTED 2025-05-28 17:11:32.079191 | orchestrator | 2025-05-28 17:11:32 | INFO  | Task 1273927c-989e-4a38-8d59-43fa1848ade1 is in state STARTED 2025-05-28 17:11:32.079383 | orchestrator | 2025-05-28 17:11:32 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:11:35.123757 | orchestrator | 2025-05-28 17:11:35 | INFO  | Task d6b65c06-0eb8-46cb-90f5-3ab920eed557 is in state STARTED 2025-05-28 17:11:35.126344 | orchestrator | 2025-05-28 17:11:35 | INFO  | Task cedce0cb-b944-4c43-88e2-4221b658cbc5 is in state STARTED 2025-05-28 17:11:35.126377 | orchestrator | 2025-05-28 17:11:35 | INFO  | Task 9dbb75ed-1f68-41be-afd3-4273c9c8cdb8 is in state STARTED 2025-05-28 17:11:35.127722 | orchestrator | 2025-05-28 17:11:35 | INFO  | Task 5782cb1b-6104-4996-b9ae-5fe82aaa6314 is in state STARTED 2025-05-28 17:11:35.132211 | orchestrator | 2025-05-28 17:11:35 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:11:35.132236 | orchestrator | 2025-05-28 17:11:35 | INFO  | Task 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 is in state STARTED 2025-05-28 17:11:35.132249 | orchestrator | 2025-05-28 17:11:35 | INFO  | Task 1273927c-989e-4a38-8d59-43fa1848ade1 is in state STARTED 2025-05-28 17:11:35.138459 | orchestrator | 2025-05-28 17:11:35 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:11:38.184347 | orchestrator | 2025-05-28 17:11:38 | INFO  | Task d6b65c06-0eb8-46cb-90f5-3ab920eed557 is in state STARTED 2025-05-28 17:11:38.184773 | orchestrator | 2025-05-28 17:11:38 | INFO  | Task cedce0cb-b944-4c43-88e2-4221b658cbc5 is in state STARTED 2025-05-28 17:11:38.186473 | orchestrator | 2025-05-28 17:11:38 | INFO  | Task 9dbb75ed-1f68-41be-afd3-4273c9c8cdb8 is in state STARTED 2025-05-28 17:11:38.190079 | orchestrator | 2025-05-28 17:11:38 | INFO  | Task 5782cb1b-6104-4996-b9ae-5fe82aaa6314 is in state STARTED 2025-05-28 17:11:38.193485 | orchestrator | 2025-05-28 17:11:38 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:11:38.193532 | orchestrator | 2025-05-28 17:11:38 | INFO  | Task 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 is in state STARTED 2025-05-28 17:11:38.193583 | orchestrator | 2025-05-28 17:11:38 | INFO  | Task 1273927c-989e-4a38-8d59-43fa1848ade1 is in state STARTED 2025-05-28 17:11:38.193719 | orchestrator | 2025-05-28 17:11:38 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:11:41.270613 | orchestrator | 2025-05-28 17:11:41 | INFO  | Task d6b65c06-0eb8-46cb-90f5-3ab920eed557 is in state STARTED 2025-05-28 17:11:41.274747 | orchestrator | 2025-05-28 17:11:41 | INFO  | Task cedce0cb-b944-4c43-88e2-4221b658cbc5 is in state STARTED 2025-05-28 17:11:41.275191 | orchestrator | 2025-05-28 17:11:41 | INFO  | Task 9dbb75ed-1f68-41be-afd3-4273c9c8cdb8 is in state STARTED 2025-05-28 17:11:41.276352 | 
orchestrator | 2025-05-28 17:11:41 | INFO  | Task 5782cb1b-6104-4996-b9ae-5fe82aaa6314 is in state STARTED 2025-05-28 17:11:41.278614 | orchestrator | 2025-05-28 17:11:41 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:11:41.282104 | orchestrator | 2025-05-28 17:11:41 | INFO  | Task 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 is in state STARTED 2025-05-28 17:11:41.283980 | orchestrator | 2025-05-28 17:11:41 | INFO  | Task 1273927c-989e-4a38-8d59-43fa1848ade1 is in state STARTED 2025-05-28 17:11:41.284741 | orchestrator | 2025-05-28 17:11:41 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:11:44.342731 | orchestrator | 2025-05-28 17:11:44 | INFO  | Task d6b65c06-0eb8-46cb-90f5-3ab920eed557 is in state STARTED 2025-05-28 17:11:44.342908 | orchestrator | 2025-05-28 17:11:44 | INFO  | Task cedce0cb-b944-4c43-88e2-4221b658cbc5 is in state STARTED 2025-05-28 17:11:44.345416 | orchestrator | 2025-05-28 17:11:44 | INFO  | Task 9dbb75ed-1f68-41be-afd3-4273c9c8cdb8 is in state STARTED 2025-05-28 17:11:44.347521 | orchestrator | 2025-05-28 17:11:44 | INFO  | Task 5782cb1b-6104-4996-b9ae-5fe82aaa6314 is in state STARTED 2025-05-28 17:11:44.350503 | orchestrator | 2025-05-28 17:11:44 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:11:44.353861 | orchestrator | 2025-05-28 17:11:44 | INFO  | Task 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 is in state STARTED 2025-05-28 17:11:44.355466 | orchestrator | 2025-05-28 17:11:44 | INFO  | Task 1273927c-989e-4a38-8d59-43fa1848ade1 is in state STARTED 2025-05-28 17:11:44.355495 | orchestrator | 2025-05-28 17:11:44 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:11:47.414124 | orchestrator | 2025-05-28 17:11:47 | INFO  | Task d6b65c06-0eb8-46cb-90f5-3ab920eed557 is in state STARTED 2025-05-28 17:11:47.419733 | orchestrator | 2025-05-28 17:11:47 | INFO  | Task cedce0cb-b944-4c43-88e2-4221b658cbc5 is in state STARTED 2025-05-28 17:11:47.423144 | orchestrator | 2025-05-28 17:11:47.423182 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-05-28 17:11:47.423195 | orchestrator | 2025-05-28 17:11:47.423206 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-05-28 17:11:47.423217 | orchestrator | Wednesday 28 May 2025 17:11:28 +0000 (0:00:00.535) 0:00:00.535 ********* 2025-05-28 17:11:47.423229 | orchestrator | changed: [testbed-manager] 2025-05-28 17:11:47.423241 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:11:47.423252 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:11:47.423263 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:11:47.423273 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:11:47.423283 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:11:47.423294 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:11:47.423304 | orchestrator | 2025-05-28 17:11:47.423315 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2025-05-28 17:11:47.423326 | orchestrator | Wednesday 28 May 2025 17:11:32 +0000 (0:00:03.835) 0:00:04.370 ********* 2025-05-28 17:11:47.423337 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-05-28 17:11:47.423349 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-05-28 17:11:47.423359 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-05-28 17:11:47.423370 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-05-28 17:11:47.423380 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-05-28 17:11:47.423391 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-05-28 17:11:47.423401 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-05-28 17:11:47.423412 | orchestrator | 2025-05-28 17:11:47.423422 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-05-28 17:11:47.423434 | orchestrator | Wednesday 28 May 2025 17:11:34 +0000 (0:00:01.937) 0:00:06.308 ********* 2025-05-28 17:11:47.423459 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-28 17:11:33.387120', 'end': '2025-05-28 17:11:33.391807', 'delta': '0:00:00.004687', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-28 17:11:47.423496 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-28 17:11:33.402775', 'end': '2025-05-28 17:11:33.413265', 'delta': '0:00:00.010490', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-28 17:11:47.423509 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-28 17:11:33.735277', 'end': '2025-05-28 17:11:33.744307', 'delta': '0:00:00.009030', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 
'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-28 17:11:47.423544 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-28 17:11:33.915552', 'end': '2025-05-28 17:11:33.921159', 'delta': '0:00:00.005607', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-28 17:11:47.423556 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-28 17:11:34.109777', 'end': '2025-05-28 17:11:34.115141', 'delta': '0:00:00.005364', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-28 17:11:47.425091 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-28 17:11:34.342067', 'end': '2025-05-28 17:11:34.352531', 'delta': '0:00:00.010464', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-28 17:11:47.425171 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-28 17:11:34.397571', 'end': '2025-05-28 17:11:34.405778', 'delta': '0:00:00.008207', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-28 17:11:47.425184 | orchestrator | 2025-05-28 17:11:47.425196 
| orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-05-28 17:11:47.425208 | orchestrator | Wednesday 28 May 2025 17:11:37 +0000 (0:00:02.463) 0:00:08.772 ********* 2025-05-28 17:11:47.425219 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-05-28 17:11:47.425230 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-05-28 17:11:47.425241 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-05-28 17:11:47.425252 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-05-28 17:11:47.425262 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-05-28 17:11:47.425272 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-05-28 17:11:47.425283 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-05-28 17:11:47.425293 | orchestrator | 2025-05-28 17:11:47.425304 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-05-28 17:11:47.425315 | orchestrator | Wednesday 28 May 2025 17:11:40 +0000 (0:00:03.266) 0:00:12.039 ********* 2025-05-28 17:11:47.425325 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-05-28 17:11:47.425336 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-05-28 17:11:47.425347 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-05-28 17:11:47.425357 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-05-28 17:11:47.425368 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-05-28 17:11:47.425378 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-05-28 17:11:47.425389 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-05-28 17:11:47.425399 | orchestrator | 2025-05-28 17:11:47.425410 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:11:47.425434 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:11:47.425447 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:11:47.425459 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:11:47.425470 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:11:47.425480 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:11:47.425491 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:11:47.425502 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:11:47.425521 | orchestrator | 2025-05-28 17:11:47.425532 | orchestrator | 2025-05-28 17:11:47.425542 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:11:47.425553 | orchestrator | Wednesday 28 May 2025 17:11:46 +0000 (0:00:05.751) 0:00:17.790 ********* 2025-05-28 17:11:47.425564 | orchestrator | =============================================================================== 2025-05-28 17:11:47.425575 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 5.75s 2025-05-28 17:11:47.425585 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. 
---- 3.84s 2025-05-28 17:11:47.425596 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 3.27s 2025-05-28 17:11:47.425607 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.46s 2025-05-28 17:11:47.425618 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.94s 2025-05-28 17:11:47.425665 | orchestrator | 2025-05-28 17:11:47 | INFO  | Task 9dbb75ed-1f68-41be-afd3-4273c9c8cdb8 is in state STARTED 2025-05-28 17:11:47.425678 | orchestrator | 2025-05-28 17:11:47 | INFO  | Task 5782cb1b-6104-4996-b9ae-5fe82aaa6314 is in state SUCCESS 2025-05-28 17:11:47.425763 | orchestrator | 2025-05-28 17:11:47 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:11:47.426264 | orchestrator | 2025-05-28 17:11:47 | INFO  | Task 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 is in state STARTED 2025-05-28 17:11:47.432066 | orchestrator | 2025-05-28 17:11:47 | INFO  | Task 1273927c-989e-4a38-8d59-43fa1848ade1 is in state STARTED 2025-05-28 17:11:47.432175 | orchestrator | 2025-05-28 17:11:47 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:11:50.529171 | orchestrator | 2025-05-28 17:11:50 | INFO  | Task d6b65c06-0eb8-46cb-90f5-3ab920eed557 is in state STARTED 2025-05-28 17:11:50.529437 | orchestrator | 2025-05-28 17:11:50 | INFO  | Task cedce0cb-b944-4c43-88e2-4221b658cbc5 is in state STARTED 2025-05-28 17:11:50.529458 | orchestrator | 2025-05-28 17:11:50 | INFO  | Task a709d64a-03a4-4354-9977-e19f0194bf73 is in state STARTED 2025-05-28 17:11:50.529516 | orchestrator | 2025-05-28 17:11:50 | INFO  | Task 9dbb75ed-1f68-41be-afd3-4273c9c8cdb8 is in state STARTED 2025-05-28 17:11:50.529578 | orchestrator | 2025-05-28 17:11:50 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:11:50.530112 | orchestrator | 2025-05-28 17:11:50 | INFO  | Task 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 is in state STARTED 2025-05-28 17:11:50.531351 | orchestrator | 2025-05-28 17:11:50 | INFO  | Task 1273927c-989e-4a38-8d59-43fa1848ade1 is in state STARTED 2025-05-28 17:11:50.531459 | orchestrator | 2025-05-28 17:11:50 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:11:53.572301 | orchestrator | 2025-05-28 17:11:53 | INFO  | Task d6b65c06-0eb8-46cb-90f5-3ab920eed557 is in state STARTED 2025-05-28 17:11:53.572824 | orchestrator | 2025-05-28 17:11:53 | INFO  | Task cedce0cb-b944-4c43-88e2-4221b658cbc5 is in state STARTED 2025-05-28 17:11:53.574326 | orchestrator | 2025-05-28 17:11:53 | INFO  | Task a709d64a-03a4-4354-9977-e19f0194bf73 is in state STARTED 2025-05-28 17:11:53.574885 | orchestrator | 2025-05-28 17:11:53 | INFO  | Task 9dbb75ed-1f68-41be-afd3-4273c9c8cdb8 is in state STARTED 2025-05-28 17:11:53.576115 | orchestrator | 2025-05-28 17:11:53 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:11:53.576575 | orchestrator | 2025-05-28 17:11:53 | INFO  | Task 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 is in state STARTED 2025-05-28 17:11:53.578216 | orchestrator | 2025-05-28 17:11:53 | INFO  | Task 1273927c-989e-4a38-8d59-43fa1848ade1 is in state STARTED 2025-05-28 17:11:53.578239 | orchestrator | 2025-05-28 17:11:53 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:11:56.647695 | orchestrator | 2025-05-28 17:11:56 | INFO  | Task d6b65c06-0eb8-46cb-90f5-3ab920eed557 is in state STARTED 2025-05-28 17:11:56.647830 | orchestrator | 2025-05-28 17:11:56 | INFO  | Task 
cedce0cb-b944-4c43-88e2-4221b658cbc5 is in state STARTED 2025-05-28 17:11:56.649064 | orchestrator | 2025-05-28 17:11:56 | INFO  | Task a709d64a-03a4-4354-9977-e19f0194bf73 is in state STARTED 2025-05-28 17:11:56.650466 | orchestrator | 2025-05-28 17:11:56 | INFO  | Task 9dbb75ed-1f68-41be-afd3-4273c9c8cdb8 is in state STARTED 2025-05-28 17:11:56.652489 | orchestrator | 2025-05-28 17:11:56 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:11:56.655103 | orchestrator | 2025-05-28 17:11:56 | INFO  | Task 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 is in state STARTED 2025-05-28 17:11:56.656173 | orchestrator | 2025-05-28 17:11:56 | INFO  | Task 1273927c-989e-4a38-8d59-43fa1848ade1 is in state STARTED 2025-05-28 17:11:56.656522 | orchestrator | 2025-05-28 17:11:56 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:11:59.701070 | orchestrator | 2025-05-28 17:11:59 | INFO  | Task d6b65c06-0eb8-46cb-90f5-3ab920eed557 is in state STARTED 2025-05-28 17:11:59.701545 | orchestrator | 2025-05-28 17:11:59 | INFO  | Task cedce0cb-b944-4c43-88e2-4221b658cbc5 is in state STARTED 2025-05-28 17:11:59.701748 | orchestrator | 2025-05-28 17:11:59 | INFO  | Task a709d64a-03a4-4354-9977-e19f0194bf73 is in state STARTED 2025-05-28 17:11:59.702964 | orchestrator | 2025-05-28 17:11:59 | INFO  | Task 9dbb75ed-1f68-41be-afd3-4273c9c8cdb8 is in state STARTED 2025-05-28 17:11:59.707036 | orchestrator | 2025-05-28 17:11:59 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:11:59.707635 | orchestrator | 2025-05-28 17:11:59 | INFO  | Task 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 is in state STARTED 2025-05-28 17:11:59.707970 | orchestrator | 2025-05-28 17:11:59 | INFO  | Task 1273927c-989e-4a38-8d59-43fa1848ade1 is in state STARTED 2025-05-28 17:11:59.707997 | orchestrator | 2025-05-28 17:11:59 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:12:02.761958 | orchestrator | 2025-05-28 17:12:02 | INFO  | Task d6b65c06-0eb8-46cb-90f5-3ab920eed557 is in state STARTED 2025-05-28 17:12:02.763939 | orchestrator | 2025-05-28 17:12:02 | INFO  | Task cedce0cb-b944-4c43-88e2-4221b658cbc5 is in state STARTED 2025-05-28 17:12:02.775225 | orchestrator | 2025-05-28 17:12:02 | INFO  | Task a709d64a-03a4-4354-9977-e19f0194bf73 is in state STARTED 2025-05-28 17:12:02.776299 | orchestrator | 2025-05-28 17:12:02 | INFO  | Task 9dbb75ed-1f68-41be-afd3-4273c9c8cdb8 is in state STARTED 2025-05-28 17:12:02.778992 | orchestrator | 2025-05-28 17:12:02 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:12:02.783765 | orchestrator | 2025-05-28 17:12:02 | INFO  | Task 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 is in state STARTED 2025-05-28 17:12:02.792253 | orchestrator | 2025-05-28 17:12:02 | INFO  | Task 1273927c-989e-4a38-8d59-43fa1848ade1 is in state STARTED 2025-05-28 17:12:02.792285 | orchestrator | 2025-05-28 17:12:02 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:12:05.852445 | orchestrator | 2025-05-28 17:12:05 | INFO  | Task d6b65c06-0eb8-46cb-90f5-3ab920eed557 is in state SUCCESS 2025-05-28 17:12:05.852553 | orchestrator | 2025-05-28 17:12:05 | INFO  | Task cedce0cb-b944-4c43-88e2-4221b658cbc5 is in state STARTED 2025-05-28 17:12:05.855686 | orchestrator | 2025-05-28 17:12:05 | INFO  | Task a709d64a-03a4-4354-9977-e19f0194bf73 is in state STARTED 2025-05-28 17:12:05.859151 | orchestrator | 2025-05-28 17:12:05 | INFO  | Task 9dbb75ed-1f68-41be-afd3-4273c9c8cdb8 is in state STARTED 2025-05-28 17:12:05.863039 | 
orchestrator | 2025-05-28 17:12:05 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:12:05.865009 | orchestrator | 2025-05-28 17:12:05 | INFO  | Task 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 is in state STARTED 2025-05-28 17:12:05.866332 | orchestrator | 2025-05-28 17:12:05 | INFO  | Task 1273927c-989e-4a38-8d59-43fa1848ade1 is in state STARTED 2025-05-28 17:12:05.866354 | orchestrator | 2025-05-28 17:12:05 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:12:08.935089 | orchestrator | 2025-05-28 17:12:08 | INFO  | Task cedce0cb-b944-4c43-88e2-4221b658cbc5 is in state STARTED 2025-05-28 17:12:08.940625 | orchestrator | 2025-05-28 17:12:08 | INFO  | Task a709d64a-03a4-4354-9977-e19f0194bf73 is in state STARTED 2025-05-28 17:12:08.943179 | orchestrator | 2025-05-28 17:12:08 | INFO  | Task 9dbb75ed-1f68-41be-afd3-4273c9c8cdb8 is in state STARTED 2025-05-28 17:12:08.946384 | orchestrator | 2025-05-28 17:12:08 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:12:08.949050 | orchestrator | 2025-05-28 17:12:08 | INFO  | Task 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 is in state STARTED 2025-05-28 17:12:08.951519 | orchestrator | 2025-05-28 17:12:08 | INFO  | Task 1273927c-989e-4a38-8d59-43fa1848ade1 is in state STARTED 2025-05-28 17:12:08.954159 | orchestrator | 2025-05-28 17:12:08 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:12:12.069301 | orchestrator | 2025-05-28 17:12:12 | INFO  | Task cedce0cb-b944-4c43-88e2-4221b658cbc5 is in state STARTED 2025-05-28 17:12:12.069423 | orchestrator | 2025-05-28 17:12:12 | INFO  | Task a709d64a-03a4-4354-9977-e19f0194bf73 is in state STARTED 2025-05-28 17:12:12.071223 | orchestrator | 2025-05-28 17:12:12 | INFO  | Task 9dbb75ed-1f68-41be-afd3-4273c9c8cdb8 is in state STARTED 2025-05-28 17:12:12.077201 | orchestrator | 2025-05-28 17:12:12 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:12:12.077231 | orchestrator | 2025-05-28 17:12:12 | INFO  | Task 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 is in state STARTED 2025-05-28 17:12:12.088242 | orchestrator | 2025-05-28 17:12:12 | INFO  | Task 1273927c-989e-4a38-8d59-43fa1848ade1 is in state STARTED 2025-05-28 17:12:12.088308 | orchestrator | 2025-05-28 17:12:12 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:12:15.150154 | orchestrator | 2025-05-28 17:12:15 | INFO  | Task cedce0cb-b944-4c43-88e2-4221b658cbc5 is in state STARTED 2025-05-28 17:12:15.151228 | orchestrator | 2025-05-28 17:12:15 | INFO  | Task a709d64a-03a4-4354-9977-e19f0194bf73 is in state STARTED 2025-05-28 17:12:15.162291 | orchestrator | 2025-05-28 17:12:15 | INFO  | Task 9dbb75ed-1f68-41be-afd3-4273c9c8cdb8 is in state STARTED 2025-05-28 17:12:15.162324 | orchestrator | 2025-05-28 17:12:15 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:12:15.165827 | orchestrator | 2025-05-28 17:12:15 | INFO  | Task 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 is in state STARTED 2025-05-28 17:12:15.167972 | orchestrator | 2025-05-28 17:12:15 | INFO  | Task 1273927c-989e-4a38-8d59-43fa1848ade1 is in state STARTED 2025-05-28 17:12:15.174270 | orchestrator | 2025-05-28 17:12:15 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:12:18.211722 | orchestrator | 2025-05-28 17:12:18 | INFO  | Task cedce0cb-b944-4c43-88e2-4221b658cbc5 is in state SUCCESS 2025-05-28 17:12:18.211843 | orchestrator | 2025-05-28 17:12:18 | INFO  | Task a709d64a-03a4-4354-9977-e19f0194bf73 is in state STARTED 
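Each "Task <uuid> is in state STARTED" line is produced by the OSISM wait loop, which polls the Celery result backend for every scheduled task until it reaches a terminal state; tasks drop out of the list once they report SUCCESS, as 5782cb1b..., d6b65c06... and cedce0cb... do above. A minimal sketch of that polling pattern, assuming hypothetical Redis broker/backend URLs (the real client configuration lives in the osism CLI):

    import time
    from celery import Celery
    from celery.result import AsyncResult

    # Hypothetical connection settings, for illustration only.
    app = Celery(broker="redis://localhost:6379/0",
                 backend="redis://localhost:6379/1")

    def wait_for_tasks(task_ids, interval=1):
        """Poll all task IDs until none is still running, like the loop above."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = AsyncResult(task_id, app=app).state
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)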
2025-05-28 17:12:18.215146 | orchestrator | 2025-05-28 17:12:18 | INFO  | Task 9dbb75ed-1f68-41be-afd3-4273c9c8cdb8 is in state STARTED 2025-05-28 17:12:18.216801 | orchestrator | 2025-05-28 17:12:18 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:12:18.217489 | orchestrator | 2025-05-28 17:12:18 | INFO  | Task 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 is in state STARTED 2025-05-28 17:12:18.217528 | orchestrator | 2025-05-28 17:12:18 | INFO  | Task 1273927c-989e-4a38-8d59-43fa1848ade1 is in state STARTED 2025-05-28 17:12:18.217542 | orchestrator | 2025-05-28 17:12:18 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:12:21.257445 | orchestrator | 2025-05-28 17:12:21 | INFO  | Task a709d64a-03a4-4354-9977-e19f0194bf73 is in state STARTED 2025-05-28 17:12:21.258695 | orchestrator | 2025-05-28 17:12:21 | INFO  | Task 9dbb75ed-1f68-41be-afd3-4273c9c8cdb8 is in state STARTED 2025-05-28 17:12:21.259269 | orchestrator | 2025-05-28 17:12:21 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:12:21.259840 | orchestrator | 2025-05-28 17:12:21 | INFO  | Task 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 is in state STARTED 2025-05-28 17:12:21.260449 | orchestrator | 2025-05-28 17:12:21 | INFO  | Task 1273927c-989e-4a38-8d59-43fa1848ade1 is in state STARTED 2025-05-28 17:12:21.260469 | orchestrator | 2025-05-28 17:12:21 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:12:24.334669 | orchestrator | 2025-05-28 17:12:24 | INFO  | Task a709d64a-03a4-4354-9977-e19f0194bf73 is in state STARTED 2025-05-28 17:12:24.337006 | orchestrator | 2025-05-28 17:12:24 | INFO  | Task 9dbb75ed-1f68-41be-afd3-4273c9c8cdb8 is in state STARTED 2025-05-28 17:12:24.339743 | orchestrator | 2025-05-28 17:12:24 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:12:24.348382 | orchestrator | 2025-05-28 17:12:24 | INFO  | Task 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 is in state STARTED 2025-05-28 17:12:24.349596 | orchestrator | 2025-05-28 17:12:24 | INFO  | Task 1273927c-989e-4a38-8d59-43fa1848ade1 is in state STARTED 2025-05-28 17:12:24.350323 | orchestrator | 2025-05-28 17:12:24 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:12:27.407680 | orchestrator | 2025-05-28 17:12:27 | INFO  | Task a709d64a-03a4-4354-9977-e19f0194bf73 is in state STARTED 2025-05-28 17:12:27.410100 | orchestrator | 2025-05-28 17:12:27 | INFO  | Task 9dbb75ed-1f68-41be-afd3-4273c9c8cdb8 is in state STARTED 2025-05-28 17:12:27.413899 | orchestrator | 2025-05-28 17:12:27 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:12:27.418710 | orchestrator | 2025-05-28 17:12:27 | INFO  | Task 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 is in state STARTED 2025-05-28 17:12:27.420090 | orchestrator | 2025-05-28 17:12:27 | INFO  | Task 1273927c-989e-4a38-8d59-43fa1848ade1 is in state STARTED 2025-05-28 17:12:27.420362 | orchestrator | 2025-05-28 17:12:27 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:12:30.491007 | orchestrator | 2025-05-28 17:12:30 | INFO  | Task a709d64a-03a4-4354-9977-e19f0194bf73 is in state STARTED 2025-05-28 17:12:30.496697 | orchestrator | 2025-05-28 17:12:30 | INFO  | Task 9dbb75ed-1f68-41be-afd3-4273c9c8cdb8 is in state STARTED 2025-05-28 17:12:30.496891 | orchestrator | 2025-05-28 17:12:30 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:12:30.501150 | orchestrator | 2025-05-28 17:12:30 | INFO  | Task 
341cbb8b-4e86-49d3-ab32-990f3e5de3b2 is in state STARTED 2025-05-28 17:12:30.502764 | orchestrator | 2025-05-28 17:12:30 | INFO  | Task 1273927c-989e-4a38-8d59-43fa1848ade1 is in state STARTED 2025-05-28 17:12:30.502790 | orchestrator | 2025-05-28 17:12:30 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:12:33.566005 | orchestrator | 2025-05-28 17:12:33 | INFO  | Task a709d64a-03a4-4354-9977-e19f0194bf73 is in state STARTED 2025-05-28 17:12:33.569393 | orchestrator | 2025-05-28 17:12:33 | INFO  | Task 9dbb75ed-1f68-41be-afd3-4273c9c8cdb8 is in state STARTED 2025-05-28 17:12:33.573440 | orchestrator | 2025-05-28 17:12:33 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:12:33.574142 | orchestrator | 2025-05-28 17:12:33 | INFO  | Task 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 is in state STARTED 2025-05-28 17:12:33.575969 | orchestrator | 2025-05-28 17:12:33 | INFO  | Task 1273927c-989e-4a38-8d59-43fa1848ade1 is in state STARTED 2025-05-28 17:12:33.575994 | orchestrator | 2025-05-28 17:12:33 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:12:36.628314 | orchestrator | 2025-05-28 17:12:36.628439 | orchestrator | 2025-05-28 17:12:36.628455 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-05-28 17:12:36.628469 | orchestrator | 2025-05-28 17:12:36.628481 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-05-28 17:12:36.628493 | orchestrator | Wednesday 28 May 2025 17:11:29 +0000 (0:00:00.599) 0:00:00.599 ********* 2025-05-28 17:12:36.628504 | orchestrator | ok: [testbed-manager] => { 2025-05-28 17:12:36.628517 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2025-05-28 17:12:36.628531 | orchestrator | } 2025-05-28 17:12:36.628543 | orchestrator | 2025-05-28 17:12:36.628554 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-05-28 17:12:36.628565 | orchestrator | Wednesday 28 May 2025 17:11:30 +0000 (0:00:00.297) 0:00:00.896 ********* 2025-05-28 17:12:36.628576 | orchestrator | ok: [testbed-manager] 2025-05-28 17:12:36.628588 | orchestrator | 2025-05-28 17:12:36.628619 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-05-28 17:12:36.628630 | orchestrator | Wednesday 28 May 2025 17:11:31 +0000 (0:00:01.429) 0:00:02.326 ********* 2025-05-28 17:12:36.628642 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-05-28 17:12:36.628653 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-05-28 17:12:36.628664 | orchestrator | 2025-05-28 17:12:36.628675 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-05-28 17:12:36.628685 | orchestrator | Wednesday 28 May 2025 17:11:32 +0000 (0:00:01.204) 0:00:03.531 ********* 2025-05-28 17:12:36.628696 | orchestrator | changed: [testbed-manager] 2025-05-28 17:12:36.628709 | orchestrator | 2025-05-28 17:12:36.628727 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-05-28 17:12:36.628745 | orchestrator | Wednesday 28 May 2025 17:11:35 +0000 (0:00:02.718) 0:00:06.250 ********* 2025-05-28 17:12:36.628764 | orchestrator | changed: [testbed-manager] 2025-05-28 17:12:36.628783 | orchestrator | 2025-05-28 17:12:36.628883 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-05-28 17:12:36.628907 | orchestrator | Wednesday 28 May 2025 17:11:36 +0000 (0:00:01.402) 0:00:07.652 ********* 2025-05-28 17:12:36.628929 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
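The FAILED - RETRYING line above is Ansible's until/retries mechanism at work: "Manage homer service" is re-run until the compose deployment reports success, with up to 10 attempts, and the openstackclient role's "Wait for an healthy service" handler further below applies the same idea to the container healthcheck. A rough equivalent of such a wait, polling a container's healthcheck status with docker inspect (the container name here is assumed for illustration):

    import subprocess
    import time

    def wait_until_healthy(container, retries=10, delay=5):
        """Re-check a container healthcheck, like an Ansible task with until/retries."""
        for attempt in range(retries):
            result = subprocess.run(
                ["docker", "inspect", "--format",
                 "{{.State.Health.Status}}", container],
                capture_output=True, text=True,
            )
            if result.stdout.strip() == "healthy":
                return True
            print(f"FAILED - RETRYING: wait for {container} "
                  f"({retries - attempt - 1} retries left).")
            time.sleep(delay)
        return False

    wait_until_healthy("homer")  # hypothetical container name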
2025-05-28 17:12:36.628949 | orchestrator | ok: [testbed-manager] 2025-05-28 17:12:36.628961 | orchestrator | 2025-05-28 17:12:36.628974 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-05-28 17:12:36.628987 | orchestrator | Wednesday 28 May 2025 17:12:03 +0000 (0:00:26.291) 0:00:33.944 ********* 2025-05-28 17:12:36.628999 | orchestrator | changed: [testbed-manager] 2025-05-28 17:12:36.629011 | orchestrator | 2025-05-28 17:12:36.629023 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:12:36.629036 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:12:36.629050 | orchestrator | 2025-05-28 17:12:36.629088 | orchestrator | 2025-05-28 17:12:36.629100 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:12:36.629112 | orchestrator | Wednesday 28 May 2025 17:12:04 +0000 (0:00:01.308) 0:00:35.252 ********* 2025-05-28 17:12:36.629124 | orchestrator | =============================================================================== 2025-05-28 17:12:36.629136 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.29s 2025-05-28 17:12:36.629148 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.72s 2025-05-28 17:12:36.629159 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.43s 2025-05-28 17:12:36.629169 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.40s 2025-05-28 17:12:36.629180 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.31s 2025-05-28 17:12:36.629190 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.20s 2025-05-28 17:12:36.629201 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.30s 2025-05-28 17:12:36.629211 | orchestrator | 2025-05-28 17:12:36.629222 | orchestrator | 2025-05-28 17:12:36.629233 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-05-28 17:12:36.629245 | orchestrator | 2025-05-28 17:12:36.629263 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-05-28 17:12:36.629294 | orchestrator | Wednesday 28 May 2025 17:11:28 +0000 (0:00:00.294) 0:00:00.294 ********* 2025-05-28 17:12:36.629314 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-05-28 17:12:36.629336 | orchestrator | 2025-05-28 17:12:36.629356 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-05-28 17:12:36.629367 | orchestrator | Wednesday 28 May 2025 17:11:28 +0000 (0:00:00.346) 0:00:00.640 ********* 2025-05-28 17:12:36.629378 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-05-28 17:12:36.629389 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-05-28 17:12:36.629400 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-05-28 17:12:36.629411 | orchestrator | 2025-05-28 17:12:36.629421 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-05-28 
17:12:36.629432 | orchestrator | Wednesday 28 May 2025 17:11:30 +0000 (0:00:01.490) 0:00:02.131 ********* 2025-05-28 17:12:36.629442 | orchestrator | changed: [testbed-manager] 2025-05-28 17:12:36.629453 | orchestrator | 2025-05-28 17:12:36.629463 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-05-28 17:12:36.629474 | orchestrator | Wednesday 28 May 2025 17:11:32 +0000 (0:00:02.148) 0:00:04.279 ********* 2025-05-28 17:12:36.629504 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-05-28 17:12:36.629516 | orchestrator | ok: [testbed-manager] 2025-05-28 17:12:36.629527 | orchestrator | 2025-05-28 17:12:36.629538 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-05-28 17:12:36.629548 | orchestrator | Wednesday 28 May 2025 17:12:10 +0000 (0:00:38.427) 0:00:42.707 ********* 2025-05-28 17:12:36.629559 | orchestrator | changed: [testbed-manager] 2025-05-28 17:12:36.629569 | orchestrator | 2025-05-28 17:12:36.629580 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-05-28 17:12:36.629591 | orchestrator | Wednesday 28 May 2025 17:12:11 +0000 (0:00:01.182) 0:00:43.889 ********* 2025-05-28 17:12:36.629601 | orchestrator | ok: [testbed-manager] 2025-05-28 17:12:36.629612 | orchestrator | 2025-05-28 17:12:36.629623 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-05-28 17:12:36.629633 | orchestrator | Wednesday 28 May 2025 17:12:12 +0000 (0:00:00.761) 0:00:44.650 ********* 2025-05-28 17:12:36.629644 | orchestrator | changed: [testbed-manager] 2025-05-28 17:12:36.629654 | orchestrator | 2025-05-28 17:12:36.629665 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-05-28 17:12:36.629685 | orchestrator | Wednesday 28 May 2025 17:12:14 +0000 (0:00:01.928) 0:00:46.579 ********* 2025-05-28 17:12:36.629696 | orchestrator | changed: [testbed-manager] 2025-05-28 17:12:36.629706 | orchestrator | 2025-05-28 17:12:36.629717 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-05-28 17:12:36.629728 | orchestrator | Wednesday 28 May 2025 17:12:15 +0000 (0:00:00.975) 0:00:47.554 ********* 2025-05-28 17:12:36.629738 | orchestrator | changed: [testbed-manager] 2025-05-28 17:12:36.629749 | orchestrator | 2025-05-28 17:12:36.629760 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-05-28 17:12:36.629771 | orchestrator | Wednesday 28 May 2025 17:12:16 +0000 (0:00:00.690) 0:00:48.246 ********* 2025-05-28 17:12:36.629781 | orchestrator | ok: [testbed-manager] 2025-05-28 17:12:36.629792 | orchestrator | 2025-05-28 17:12:36.629802 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:12:36.629813 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:12:36.629824 | orchestrator | 2025-05-28 17:12:36.629835 | orchestrator | 2025-05-28 17:12:36.629912 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:12:36.629927 | orchestrator | Wednesday 28 May 2025 17:12:16 +0000 (0:00:00.584) 0:00:48.830 ********* 2025-05-28 17:12:36.629937 | orchestrator | 
=============================================================================== 2025-05-28 17:12:36.629948 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 38.43s 2025-05-28 17:12:36.629958 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.15s 2025-05-28 17:12:36.629968 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.93s 2025-05-28 17:12:36.629977 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.49s 2025-05-28 17:12:36.629987 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.18s 2025-05-28 17:12:36.629996 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.98s 2025-05-28 17:12:36.630005 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.76s 2025-05-28 17:12:36.630015 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.69s 2025-05-28 17:12:36.630090 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.58s 2025-05-28 17:12:36.630100 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.35s 2025-05-28 17:12:36.630109 | orchestrator | 2025-05-28 17:12:36.630119 | orchestrator | 2025-05-28 17:12:36.630182 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-28 17:12:36.630192 | orchestrator | 2025-05-28 17:12:36.630202 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-28 17:12:36.630212 | orchestrator | Wednesday 28 May 2025 17:11:29 +0000 (0:00:00.650) 0:00:00.651 ********* 2025-05-28 17:12:36.630222 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-05-28 17:12:36.630231 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-05-28 17:12:36.630241 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-05-28 17:12:36.630251 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-05-28 17:12:36.630260 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-05-28 17:12:36.630270 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-05-28 17:12:36.630280 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-05-28 17:12:36.630289 | orchestrator | 2025-05-28 17:12:36.630301 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-05-28 17:12:36.630318 | orchestrator | 2025-05-28 17:12:36.630335 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-05-28 17:12:36.630352 | orchestrator | Wednesday 28 May 2025 17:11:31 +0000 (0:00:01.774) 0:00:02.425 ********* 2025-05-28 17:12:36.630431 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:12:36.630456 | orchestrator | 2025-05-28 17:12:36.630467 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-05-28 17:12:36.630476 | orchestrator | Wednesday 28 May 2025 17:11:34 +0000 (0:00:02.750) 0:00:05.175 ********* 2025-05-28 
17:12:36.630486 | orchestrator | ok: [testbed-manager] 2025-05-28 17:12:36.630495 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:12:36.630505 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:12:36.630515 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:12:36.630524 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:12:36.630545 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:12:36.630555 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:12:36.630564 | orchestrator | 2025-05-28 17:12:36.630574 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-05-28 17:12:36.630584 | orchestrator | Wednesday 28 May 2025 17:11:36 +0000 (0:00:02.298) 0:00:07.474 ********* 2025-05-28 17:12:36.630594 | orchestrator | ok: [testbed-manager] 2025-05-28 17:12:36.630603 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:12:36.630613 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:12:36.630622 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:12:36.630632 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:12:36.630641 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:12:36.630650 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:12:36.630660 | orchestrator | 2025-05-28 17:12:36.630669 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-05-28 17:12:36.630679 | orchestrator | Wednesday 28 May 2025 17:11:40 +0000 (0:00:04.082) 0:00:11.556 ********* 2025-05-28 17:12:36.630689 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:12:36.630698 | orchestrator | changed: [testbed-manager] 2025-05-28 17:12:36.630708 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:12:36.630717 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:12:36.630727 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:12:36.630736 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:12:36.630745 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:12:36.630755 | orchestrator | 2025-05-28 17:12:36.630764 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-05-28 17:12:36.630774 | orchestrator | Wednesday 28 May 2025 17:11:43 +0000 (0:00:03.140) 0:00:14.697 ********* 2025-05-28 17:12:36.630784 | orchestrator | changed: [testbed-manager] 2025-05-28 17:12:36.630793 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:12:36.630802 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:12:36.630812 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:12:36.630821 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:12:36.630831 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:12:36.630840 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:12:36.630871 | orchestrator | 2025-05-28 17:12:36.630882 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-05-28 17:12:36.630891 | orchestrator | Wednesday 28 May 2025 17:11:55 +0000 (0:00:11.227) 0:00:25.924 ********* 2025-05-28 17:12:36.630901 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:12:36.630911 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:12:36.630920 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:12:36.630930 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:12:36.630939 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:12:36.630948 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:12:36.630958 | orchestrator | changed: [testbed-manager] 2025-05-28 17:12:36.630967 | 
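On Debian-family hosts the included install tasks resolve to a conventional apt bootstrap: HTTPS transport, signing key, repository entry, then the package itself. A hedged sketch of that flow; the repository URL and suite layout are illustrative assumptions, not taken from the role:

- name: Install apt-transport-https package
  ansible.builtin.apt:
    name: apt-transport-https
    state: present

- name: Add repository gpg key
  ansible.builtin.apt_key:
    url: https://repo.netdata.cloud/netdatabot.gpg.key   # assumed URL
    state: present

- name: Add repository
  ansible.builtin.apt_repository:
    repo: "deb https://repo.netdata.cloud/repos/stable/ubuntu {{ ansible_distribution_release }}/"   # assumed layout
    state: present

- name: Install package netdata
  ansible.builtin.apt:
    name: netdata
    state: present
    update_cache: true

apt_repository refreshes the package cache when the entry changes, which is why the add-repository step costs around 11 s in the recap below while the key import stays near 3 s.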
orchestrator | 2025-05-28 17:12:36.630977 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-05-28 17:12:36.630987 | orchestrator | Wednesday 28 May 2025 17:12:12 +0000 (0:00:17.626) 0:00:43.551 ********* 2025-05-28 17:12:36.630997 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:12:36.631017 | orchestrator | 2025-05-28 17:12:36.631026 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-05-28 17:12:36.631036 | orchestrator | Wednesday 28 May 2025 17:12:14 +0000 (0:00:02.133) 0:00:45.684 ********* 2025-05-28 17:12:36.631045 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-05-28 17:12:36.631055 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-05-28 17:12:36.631065 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-05-28 17:12:36.631074 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-05-28 17:12:36.631084 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-05-28 17:12:36.631094 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-05-28 17:12:36.631103 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-05-28 17:12:36.631113 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-05-28 17:12:36.631122 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-05-28 17:12:36.631170 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-05-28 17:12:36.631181 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-05-28 17:12:36.631190 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-05-28 17:12:36.631199 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-05-28 17:12:36.631213 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-05-28 17:12:36.631222 | orchestrator | 2025-05-28 17:12:36.631232 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-05-28 17:12:36.631242 | orchestrator | Wednesday 28 May 2025 17:12:19 +0000 (0:00:04.569) 0:00:50.253 ********* 2025-05-28 17:12:36.631251 | orchestrator | ok: [testbed-manager] 2025-05-28 17:12:36.631261 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:12:36.631270 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:12:36.631279 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:12:36.631289 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:12:36.631298 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:12:36.631307 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:12:36.631316 | orchestrator | 2025-05-28 17:12:36.631326 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-05-28 17:12:36.631338 | orchestrator | Wednesday 28 May 2025 17:12:20 +0000 (0:00:01.047) 0:00:51.301 ********* 2025-05-28 17:12:36.631355 | orchestrator | changed: [testbed-manager] 2025-05-28 17:12:36.631371 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:12:36.631388 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:12:36.631404 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:12:36.631421 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:12:36.631437 | orchestrator | 
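The two opt-out tasks above implement an idempotent marker-file pattern: stat the file, then touch it only when it is missing, so the play reports changed once on first deployment and ok on every re-run. Sketch, with an assumed register name:

- name: Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status
  ansible.builtin.stat:
    path: /etc/netdata/.opt-out-from-anonymous-statistics
  register: netdata_optout   # register name is an assumption

- name: Opt out from anonymous statistics
  ansible.builtin.file:
    path: /etc/netdata/.opt-out-from-anonymous-statistics
    state: touch
    mode: "0644"
  when: not netdata_optout.stat.exists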
changed: [testbed-node-4] 2025-05-28 17:12:36.631450 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:12:36.631459 | orchestrator | 2025-05-28 17:12:36.631469 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-05-28 17:12:36.631486 | orchestrator | Wednesday 28 May 2025 17:12:22 +0000 (0:00:01.936) 0:00:53.237 ********* 2025-05-28 17:12:36.631496 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:12:36.631505 | orchestrator | ok: [testbed-manager] 2025-05-28 17:12:36.631514 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:12:36.631524 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:12:36.631533 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:12:36.631543 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:12:36.631552 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:12:36.631561 | orchestrator | 2025-05-28 17:12:36.631571 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-05-28 17:12:36.631580 | orchestrator | Wednesday 28 May 2025 17:12:24 +0000 (0:00:02.054) 0:00:55.291 ********* 2025-05-28 17:12:36.631590 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:12:36.631608 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:12:36.631618 | orchestrator | ok: [testbed-manager] 2025-05-28 17:12:36.631627 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:12:36.631636 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:12:36.631645 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:12:36.631655 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:12:36.631664 | orchestrator | 2025-05-28 17:12:36.631673 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-05-28 17:12:36.631683 | orchestrator | Wednesday 28 May 2025 17:12:26 +0000 (0:00:02.193) 0:00:57.485 ********* 2025-05-28 17:12:36.631692 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-05-28 17:12:36.631704 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:12:36.631714 | orchestrator | 2025-05-28 17:12:36.631723 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-05-28 17:12:36.631733 | orchestrator | Wednesday 28 May 2025 17:12:28 +0000 (0:00:01.880) 0:00:59.365 ********* 2025-05-28 17:12:36.631742 | orchestrator | changed: [testbed-manager] 2025-05-28 17:12:36.631752 | orchestrator | 2025-05-28 17:12:36.631761 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-05-28 17:12:36.631771 | orchestrator | Wednesday 28 May 2025 17:12:30 +0000 (0:00:02.210) 0:01:01.575 ********* 2025-05-28 17:12:36.631780 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:12:36.631789 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:12:36.631799 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:12:36.631808 | orchestrator | changed: [testbed-manager] 2025-05-28 17:12:36.631817 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:12:36.631827 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:12:36.631836 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:12:36.631902 | orchestrator | 2025-05-28 17:12:36.631914 | orchestrator | PLAY RECAP 
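Only testbed-manager takes the server.yml branch above: it acts as the streaming parent that receives metrics from the nodes, so it also raises vm.max_map_count for netdata's database engine. A minimal sketch of that server-side task, assuming the ansible.posix collection and an illustrative value:

- name: Set sysctl vm.max_map_count parameter
  ansible.posix.sysctl:
    name: vm.max_map_count
    value: "262144"   # value is an assumption
    sysctl_set: true
    state: present

Because earlier configuration tasks notified the handler, Restart service netdata then fires once per host at the end of the play rather than after every individual change.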
********************************************************************* 2025-05-28 17:12:36.631924 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:12:36.631934 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:12:36.631944 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:12:36.631954 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:12:36.631963 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:12:36.631973 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:12:36.631982 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:12:36.631991 | orchestrator | 2025-05-28 17:12:36.632001 | orchestrator | 2025-05-28 17:12:36.632010 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:12:36.632020 | orchestrator | Wednesday 28 May 2025 17:12:34 +0000 (0:00:03.631) 0:01:05.206 ********* 2025-05-28 17:12:36.632035 | orchestrator | =============================================================================== 2025-05-28 17:12:36.632045 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 17.63s 2025-05-28 17:12:36.632054 | orchestrator | osism.services.netdata : Add repository -------------------------------- 11.23s 2025-05-28 17:12:36.632071 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.57s 2025-05-28 17:12:36.632080 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.08s 2025-05-28 17:12:36.632089 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.63s 2025-05-28 17:12:36.632099 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.14s 2025-05-28 17:12:36.632108 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.75s 2025-05-28 17:12:36.632117 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.30s 2025-05-28 17:12:36.632127 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.21s 2025-05-28 17:12:36.632136 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.19s 2025-05-28 17:12:36.632145 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 2.13s 2025-05-28 17:12:36.632161 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.05s 2025-05-28 17:12:36.632171 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.94s 2025-05-28 17:12:36.632181 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.88s 2025-05-28 17:12:36.632190 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.77s 2025-05-28 17:12:36.632199 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.05s 2025-05-28 17:12:36.632209 | orchestrator | 2025-05-28 17:12:36 | INFO  | Task 
a709d64a-03a4-4354-9977-e19f0194bf73 is in state STARTED 2025-05-28 17:12:36.632219 | orchestrator | 2025-05-28 17:12:36 | INFO  | Task 9dbb75ed-1f68-41be-afd3-4273c9c8cdb8 is in state SUCCESS 2025-05-28 17:12:36.632228 | orchestrator | 2025-05-28 17:12:36 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:12:36.632238 | orchestrator | 2025-05-28 17:12:36 | INFO  | Task 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 is in state STARTED 2025-05-28 17:12:36.632247 | orchestrator | 2025-05-28 17:12:36 | INFO  | Task 1273927c-989e-4a38-8d59-43fa1848ade1 is in state STARTED 2025-05-28 17:12:36.632257 | orchestrator | 2025-05-28 17:12:36 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:13:19.427239 | orchestrator | 2025-05-28 17:13:19 | INFO  | Task a709d64a-03a4-4354-9977-e19f0194bf73 is in state SUCCESS 2025-05-28 17:13:19.433444 | orchestrator | 2025-05-28 17:13:19 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:13:19.436775 | orchestrator | 2025-05-28 17:13:19 | INFO  | Task 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 is in state STARTED 2025-05-28 17:13:19.439569 | orchestrator | 2025-05-28 17:13:19 | INFO  | Task 1273927c-989e-4a38-8d59-43fa1848ade1 is in state STARTED 2025-05-28 17:13:19.439856 | orchestrator | 2025-05-28 17:13:19 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:14:02.208521 | orchestrator | 2025-05-28 17:14:02 | INFO  | Task bfaa9d9f-71b3-4ac9-aa05-b0d00637635e is in state STARTED 2025-05-28 17:14:02.208656 | orchestrator | 2025-05-28 17:14:02 | INFO  | Task 9fd50cfe-7db8-4d21-aad0-4cf1fd7217dc is in state STARTED 2025-05-28 17:14:02.212930 | orchestrator | 2025-05-28 17:14:02 | INFO  | Task 6a852715-df3c-432e-bb79-deeb7e8c5f47 is in state STARTED 2025-05-28 17:14:02.213015 | orchestrator | 2025-05-28 17:14:02 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:14:02.213720 | orchestrator | 2025-05-28 17:14:02 | INFO  | Task 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 is in state STARTED 2025-05-28 17:14:02.217102 | orchestrator | 2025-05-28 17:14:02 | INFO  | Task 31ad9459-261b-4617-89f4-12da6da9de0a is in state STARTED 2025-05-28 17:14:02.222392 | orchestrator | 2025-05-28 17:14:02 | INFO  | Task 1273927c-989e-4a38-8d59-43fa1848ade1 is in state SUCCESS 2025-05-28 17:14:02.222552 | orchestrator | 2025-05-28 17:14:02.222574 | orchestrator | 2025-05-28 17:14:02.222587 | orchestrator
| PLAY [Apply role phpmyadmin] *************************************************** 2025-05-28 17:14:02.222599 | orchestrator | 2025-05-28 17:14:02.222610 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-05-28 17:14:02.222621 | orchestrator | Wednesday 28 May 2025 17:11:52 +0000 (0:00:00.230) 0:00:00.230 ********* 2025-05-28 17:14:02.222632 | orchestrator | ok: [testbed-manager] 2025-05-28 17:14:02.222643 | orchestrator | 2025-05-28 17:14:02.222654 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-05-28 17:14:02.222665 | orchestrator | Wednesday 28 May 2025 17:11:52 +0000 (0:00:00.813) 0:00:01.044 ********* 2025-05-28 17:14:02.222677 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-05-28 17:14:02.222688 | orchestrator | 2025-05-28 17:14:02.222699 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-05-28 17:14:02.222710 | orchestrator | Wednesday 28 May 2025 17:11:53 +0000 (0:00:00.547) 0:00:01.591 ********* 2025-05-28 17:14:02.222720 | orchestrator | changed: [testbed-manager] 2025-05-28 17:14:02.222775 | orchestrator | 2025-05-28 17:14:02.222786 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-05-28 17:14:02.222797 | orchestrator | Wednesday 28 May 2025 17:11:54 +0000 (0:00:01.335) 0:00:02.926 ********* 2025-05-28 17:14:02.222808 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2025-05-28 17:14:02.222819 | orchestrator | ok: [testbed-manager] 2025-05-28 17:14:02.222830 | orchestrator | 2025-05-28 17:14:02.222840 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-05-28 17:14:02.222851 | orchestrator | Wednesday 28 May 2025 17:13:13 +0000 (0:01:18.883) 0:01:21.810 ********* 2025-05-28 17:14:02.222861 | orchestrator | changed: [testbed-manager] 2025-05-28 17:14:02.222872 | orchestrator | 2025-05-28 17:14:02.222883 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:14:02.222894 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:14:02.222908 | orchestrator | 2025-05-28 17:14:02.222918 | orchestrator | 2025-05-28 17:14:02.222929 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:14:02.222940 | orchestrator | Wednesday 28 May 2025 17:13:17 +0000 (0:00:03.627) 0:01:25.437 ********* 2025-05-28 17:14:02.222951 | orchestrator | =============================================================================== 2025-05-28 17:14:02.222962 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 78.88s 2025-05-28 17:14:02.222973 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.63s 2025-05-28 17:14:02.222984 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.34s 2025-05-28 17:14:02.222994 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.81s 2025-05-28 17:14:02.223005 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.55s 2025-05-28 17:14:02.223016 | orchestrator | 2025-05-28 17:14:02.226424 | orchestrator | 2025-05-28 17:14:02.226470 | orchestrator | PLAY [Apply role common] 
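The phpmyadmin play above pairs a templated docker-compose.yml with a managed service step, and its FAILED - RETRYING ... (10 retries left) line is Ansible's until/retries loop riding out the initial image pull (78.88s in the recap). A hedged sketch of the pattern using a plain command; the actual role may drive compose through a module instead:

- name: Copy docker-compose.yml file
  ansible.builtin.template:
    src: docker-compose.yml.j2
    dest: /opt/phpmyadmin/docker-compose.yml
  notify: Restart phpmyadmin service

- name: Manage phpmyadmin service
  ansible.builtin.command:
    cmd: docker compose up -d
    chdir: /opt/phpmyadmin
  register: compose_result
  retries: 10        # matches the "10 retries left" message above
  delay: 10          # seconds between attempts; an assumption
  until: compose_result.rc == 0
  changed_when: false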
******************************************************* 2025-05-28 17:14:02.226483 | orchestrator | 2025-05-28 17:14:02.226494 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-05-28 17:14:02.226504 | orchestrator | Wednesday 28 May 2025 17:11:22 +0000 (0:00:00.276) 0:00:00.276 ********* 2025-05-28 17:14:02.226540 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:14:02.226553 | orchestrator | 2025-05-28 17:14:02.226564 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-05-28 17:14:02.226574 | orchestrator | Wednesday 28 May 2025 17:11:24 +0000 (0:00:01.290) 0:00:01.566 ********* 2025-05-28 17:14:02.226652 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-28 17:14:02.226665 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-28 17:14:02.226676 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-28 17:14:02.226687 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-28 17:14:02.226698 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-28 17:14:02.226709 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-28 17:14:02.226720 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-28 17:14:02.226762 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-28 17:14:02.226775 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-28 17:14:02.226787 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-28 17:14:02.226798 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-28 17:14:02.226809 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-28 17:14:02.226820 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-28 17:14:02.226831 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-28 17:14:02.226842 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-28 17:14:02.226852 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-28 17:14:02.226863 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-28 17:14:02.226874 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-28 17:14:02.226885 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-28 17:14:02.226900 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-28 17:14:02.226920 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-28 17:14:02.226938 | orchestrator | 2025-05-28 17:14:02.226953 | orchestrator | TASK [common : include_tasks] 
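The common role starts by creating one config directory per enabled service (cron, fluentd, kolla-toolbox) on every host. Reduced sketch; the real kolla-ansible task loops over full service definitions, flattened here to plain names:

- name: Ensuring config directories exist
  ansible.builtin.file:
    path: "/etc/kolla/{{ item }}"
    state: directory
    owner: root
    group: root
    mode: "0770"
  become: true
  loop:
    - cron
    - fluentd
    - kolla-toolbox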
************************************************** 2025-05-28 17:14:02.226963 | orchestrator | Wednesday 28 May 2025 17:11:28 +0000 (0:00:04.464) 0:00:06.031 ********* 2025-05-28 17:14:02.226975 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:14:02.226988 | orchestrator | 2025-05-28 17:14:02.227000 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-05-28 17:14:02.227024 | orchestrator | Wednesday 28 May 2025 17:11:30 +0000 (0:00:01.460) 0:00:07.491 ********* 2025-05-28 17:14:02.227042 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 17:14:02.227069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 17:14:02.227134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 17:14:02.227149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 17:14:02.227167 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.227182 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 17:14:02.227195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.227209 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 17:14:02.227230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.227258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.227271 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': 
{}}}) 2025-05-28 17:14:02.227285 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.227317 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.227359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.227371 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.227383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.227401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.227420 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.227433 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.227444 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.227460 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.227471 | orchestrator | 2025-05-28 17:14:02.227483 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-05-28 17:14:02.227494 | orchestrator | Wednesday 28 May 2025 17:11:35 +0000 (0:00:05.012) 0:00:12.503 ********* 2025-05-28 17:14:02.227506 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-28 17:14:02.227518 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:14:02.227535 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 
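All of the service-cert-copy output follows one shape: the extra-CA copy runs for every enabled service, while the backend internal TLS certificate and key copies sit behind a feature flag that is off in this testbed, which is why every host reports skipping. Sketch with assumed source variables:

- name: common | Copying over extra CA certificates
  ansible.builtin.copy:
    src: "{{ kolla_certificates_dir }}/ca/"            # assumed variable
    dest: "/etc/kolla/{{ item.key }}/ca-certificates/"
    mode: "0644"
  loop: "{{ common_services | dict2items | selectattr('value.enabled') | list }}"

- name: common | Copying over backend internal TLS certificate
  ansible.builtin.copy:
    src: "{{ kolla_tls_backend_cert }}"                # assumed variable
    dest: "/etc/kolla/{{ item.key }}/backend-cert.pem"
  loop: "{{ common_services | dict2items | selectattr('value.enabled') | list }}"
  when: kolla_enable_tls_backend | bool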
17:14:02.227547 | orchestrator | skipping: [testbed-manager] 2025-05-28 17:14:02.227558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-28 17:14:02.227575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:14:02.227587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:14:02.227598 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:14:02.227609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-28 17:14:02.227625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:14:02.227637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:14:02.227648 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:14:02.227665 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-28 17:14:02.227676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:14:02.227688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:14:02.227699 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:14:02.227715 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-28 17:14:02.227766 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:14:02.227781 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:14:02.227800 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:14:02.227827 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 
'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-28 17:14:02.227855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:14:02.227899 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:14:02.227919 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:14:02.227938 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-28 17:14:02.227969 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:14:02.227989 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:14:02.228008 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:14:02.228028 | orchestrator | 2025-05-28 17:14:02.228048 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-05-28 17:14:02.228068 | orchestrator | Wednesday 28 May 2025 
17:11:36 +0000 (0:00:01.485) 0:00:13.989 ********* 2025-05-28 17:14:02.228088 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-28 17:14:02.228118 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:14:02.228151 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:14:02.228170 | orchestrator | skipping: [testbed-manager] 2025-05-28 17:14:02.228189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-28 17:14:02.228247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:14:02.228268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:14:02.228287 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:14:02.228370 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-28 17:14:02.228395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:14:02.228416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:14:02.228436 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:14:02.228454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-28 17:14:02.228494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:14:02.228514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:14:02.228534 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-28 17:14:02.228553 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:14:02.228590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:14:02.228611 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:14:02.228632 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:14:02.228650 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-28 17:14:02.228676 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:14:02.228706 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:14:02.228725 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:14:02.228773 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-28 17:14:02.228793 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:14:02.228812 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:14:02.228830 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:14:02.228848 | orchestrator | 2025-05-28 17:14:02.228867 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-05-28 17:14:02.228887 | orchestrator | Wednesday 28 May 2025 17:11:39 +0000 (0:00:03.218) 0:00:17.207 ********* 2025-05-28 17:14:02.228905 | orchestrator | skipping: [testbed-manager] 2025-05-28 17:14:02.228925 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:14:02.228944 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:14:02.228963 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:14:02.228981 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:14:02.229010 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:14:02.229029 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:14:02.229049 | orchestrator | 2025-05-28 17:14:02.229068 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-05-28 17:14:02.229089 | orchestrator | Wednesday 28 May 2025 17:11:41 +0000 (0:00:01.706) 0:00:18.914 ********* 2025-05-28 17:14:02.229108 | orchestrator | skipping: [testbed-manager] 2025-05-28 17:14:02.229126 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:14:02.229145 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:14:02.229163 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:14:02.229180 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:14:02.229199 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:14:02.229233 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:14:02.229253 | orchestrator | 2025-05-28 17:14:02.229271 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-05-28 17:14:02.229290 | orchestrator | Wednesday 28 May 2025 17:11:43 +0000 (0:00:01.918) 0:00:20.832 ********* 2025-05-28 17:14:02.229310 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 17:14:02.229339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 17:14:02.229360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 17:14:02.229381 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.229401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 17:14:02.229420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.229451 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 17:14:02.229481 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 17:14:02.229500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.229527 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 17:14:02.229548 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.229568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.229587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.229606 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.229649 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.229669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.229696 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.229717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.229767 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.229788 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 
'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.229807 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.229826 | orchestrator | 2025-05-28 17:14:02.229844 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-05-28 17:14:02.229864 | orchestrator | Wednesday 28 May 2025 17:11:49 +0000 (0:00:06.600) 0:00:27.433 ********* 2025-05-28 17:14:02.229883 | orchestrator | [WARNING]: Skipped 2025-05-28 17:14:02.229902 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-05-28 17:14:02.229922 | orchestrator | to this access issue: 2025-05-28 17:14:02.229941 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-05-28 17:14:02.229972 | orchestrator | directory 2025-05-28 17:14:02.229991 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-28 17:14:02.230009 | orchestrator | 2025-05-28 17:14:02.230095 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-05-28 17:14:02.230116 | orchestrator | Wednesday 28 May 2025 17:11:51 +0000 (0:00:01.019) 0:00:28.453 ********* 2025-05-28 17:14:02.230137 | orchestrator | [WARNING]: Skipped 2025-05-28 17:14:02.230156 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-05-28 17:14:02.230187 | orchestrator | to this access issue: 2025-05-28 17:14:02.230207 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-05-28 17:14:02.230227 | orchestrator | directory 2025-05-28 17:14:02.230247 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-28 17:14:02.230266 | orchestrator | 2025-05-28 17:14:02.230285 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-05-28 17:14:02.230304 | orchestrator | Wednesday 28 May 2025 17:11:51 +0000 (0:00:00.775) 0:00:29.228 ********* 2025-05-28 17:14:02.230324 | orchestrator | [WARNING]: Skipped 2025-05-28 17:14:02.230344 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-05-28 17:14:02.230365 | orchestrator | to this access issue: 2025-05-28 17:14:02.230384 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-05-28 17:14:02.230403 | orchestrator | directory 2025-05-28 17:14:02.230424 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-28 17:14:02.230443 | orchestrator | 2025-05-28 17:14:02.230463 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-05-28 17:14:02.230483 | orchestrator | Wednesday 28 May 2025 17:11:52 +0000 (0:00:00.740) 0:00:29.969 ********* 2025-05-28 17:14:02.230501 | orchestrator | [WARNING]: Skipped 2025-05-28 17:14:02.230520 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-05-28 17:14:02.230538 | orchestrator | to this access issue: 2025-05-28 17:14:02.230557 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-05-28 17:14:02.230577 | orchestrator | directory 2025-05-28 17:14:02.230596 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-28 17:14:02.230615 | orchestrator | 2025-05-28 17:14:02.230636 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-05-28 17:14:02.230656 | orchestrator | Wednesday 28 May 2025 17:11:53 +0000 (0:00:00.829) 0:00:30.798 ********* 2025-05-28 17:14:02.230675 | orchestrator | changed: [testbed-manager] 2025-05-28 17:14:02.230694 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:14:02.230713 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:14:02.230757 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:14:02.230777 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:14:02.230795 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:14:02.230823 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:14:02.230844 | orchestrator | 2025-05-28 17:14:02.230862 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-05-28 17:14:02.230882 | orchestrator | Wednesday 28 May 2025 17:11:58 +0000 (0:00:05.276) 0:00:36.074 ********* 2025-05-28 17:14:02.230900 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-28 17:14:02.230918 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-28 17:14:02.230938 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-28 17:14:02.230956 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-28 17:14:02.230974 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-28 17:14:02.230993 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-28 17:14:02.231027 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-28 17:14:02.231046 | orchestrator | 2025-05-28 17:14:02.231065 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-05-28 17:14:02.231084 | orchestrator | Wednesday 28 May 2025 17:12:01 +0000 (0:00:02.994) 0:00:39.068 ********* 2025-05-28 17:14:02.231104 | orchestrator | changed: [testbed-manager] 2025-05-28 17:14:02.231122 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:14:02.231139 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:14:02.231158 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:14:02.231177 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:14:02.231195 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:14:02.231212 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:14:02.231228 | orchestrator | 2025-05-28 17:14:02.231245 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-05-28 17:14:02.231261 | orchestrator | Wednesday 28 May 2025 17:12:04 +0000 (0:00:03.338) 0:00:42.406 ********* 2025-05-28 17:14:02.231280 | 
orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 17:14:02.231306 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:14:02.231325 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 17:14:02.231342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:14:02.231367 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.231387 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 17:14:02.231417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:14:02.231435 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 17:14:02.231452 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.231479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:14:02.231496 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.231513 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 17:14:02.231543 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:14:02.231571 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 17:14:02.231589 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:14:02.231605 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.231623 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 17:14:02.231663 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:14:02.231681 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.231698 | orchestrator | ok: 
[testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.231714 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.231813 | orchestrator | 2025-05-28 17:14:02.231834 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-05-28 17:14:02.231851 | orchestrator | Wednesday 28 May 2025 17:12:06 +0000 (0:00:01.920) 0:00:44.327 ********* 2025-05-28 17:14:02.231869 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-28 17:14:02.231885 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-28 17:14:02.231902 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-28 17:14:02.231920 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-28 17:14:02.231936 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-28 17:14:02.231954 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-28 17:14:02.231971 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-28 17:14:02.231987 | orchestrator | 2025-05-28 17:14:02.232003 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-05-28 17:14:02.232019 | orchestrator | Wednesday 28 May 2025 17:12:08 +0000 (0:00:02.021) 0:00:46.348 ********* 2025-05-28 17:14:02.232035 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-28 17:14:02.232051 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-28 17:14:02.232067 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-28 17:14:02.232084 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-28 17:14:02.232101 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-28 17:14:02.232117 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-28 17:14:02.232134 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-28 17:14:02.232150 | orchestrator | 2025-05-28 17:14:02.232167 | orchestrator | TASK [common : Check common containers] **************************************** 2025-05-28 17:14:02.232184 | orchestrator | Wednesday 28 May 2025 17:12:12 +0000 (0:00:03.997) 0:00:50.346 ********* 2025-05-28 17:14:02.232201 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 17:14:02.232231 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 17:14:02 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:14:02.232286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 17:14:02.232316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 17:14:02.232341 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.232359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.232376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.232394 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 17:14:02.232422 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 17:14:02.232440 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 17:14:02.232467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.232490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.232508 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.232525 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.232542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.232559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.232586 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.232613 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.232632 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.232687 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.232709 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:14:02.232727 | orchestrator | 2025-05-28 17:14:02.232768 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-05-28 17:14:02.232784 | orchestrator | Wednesday 28 May 2025 17:12:16 +0000 (0:00:03.576) 0:00:53.922 ********* 2025-05-28 17:14:02.232800 | orchestrator | changed: [testbed-manager] 2025-05-28 17:14:02.232818 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:14:02.232835 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:14:02.232851 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:14:02.232868 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:14:02.232884 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:14:02.232900 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:14:02.232918 | orchestrator | 2025-05-28 17:14:02.232935 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-05-28 17:14:02.232951 | orchestrator | Wednesday 28 May 2025 17:12:18 +0000 (0:00:01.721) 0:00:55.643 ********* 2025-05-28 17:14:02.232968 | orchestrator | changed: [testbed-manager] 2025-05-28 17:14:02.232985 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:14:02.233002 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:14:02.233019 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:14:02.233035 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:14:02.233052 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:14:02.233068 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:14:02.233084 | orchestrator | 2025-05-28 17:14:02.233101 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-28 17:14:02.233117 | orchestrator | Wednesday 28 May 2025 17:12:19 +0000 (0:00:01.217) 0:00:56.861 ********* 2025-05-28 17:14:02.233134 | orchestrator | 2025-05-28 17:14:02.233150 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-28 17:14:02.233168 | orchestrator | Wednesday 28 May 2025 17:12:19 +0000 (0:00:00.068) 0:00:56.930 ********* 2025-05-28 17:14:02.233184 | orchestrator | 2025-05-28 17:14:02.233213 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-28 17:14:02.233229 | orchestrator | Wednesday 28 May 2025 17:12:19 +0000 (0:00:00.081) 0:00:57.012 ********* 2025-05-28 17:14:02.233246 | orchestrator | 2025-05-28 17:14:02.233263 | orchestrator | TASK [common : Flush 
handlers] ************************************************* 2025-05-28 17:14:02.233279 | orchestrator | Wednesday 28 May 2025 17:12:19 +0000 (0:00:00.208) 0:00:57.221 ********* 2025-05-28 17:14:02.233295 | orchestrator | 2025-05-28 17:14:02.233311 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-28 17:14:02.233327 | orchestrator | Wednesday 28 May 2025 17:12:19 +0000 (0:00:00.061) 0:00:57.282 ********* 2025-05-28 17:14:02.233343 | orchestrator | 2025-05-28 17:14:02.233359 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-28 17:14:02.233377 | orchestrator | Wednesday 28 May 2025 17:12:19 +0000 (0:00:00.070) 0:00:57.352 ********* 2025-05-28 17:14:02.233392 | orchestrator | 2025-05-28 17:14:02.233410 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-28 17:14:02.233427 | orchestrator | Wednesday 28 May 2025 17:12:19 +0000 (0:00:00.057) 0:00:57.410 ********* 2025-05-28 17:14:02.233443 | orchestrator | 2025-05-28 17:14:02.233470 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-05-28 17:14:02.233486 | orchestrator | Wednesday 28 May 2025 17:12:20 +0000 (0:00:00.075) 0:00:57.485 ********* 2025-05-28 17:14:02.233503 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:14:02.233520 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:14:02.233538 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:14:02.233554 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:14:02.233571 | orchestrator | changed: [testbed-manager] 2025-05-28 17:14:02.233588 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:14:02.233603 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:14:02.233619 | orchestrator | 2025-05-28 17:14:02.233636 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-05-28 17:14:02.233652 | orchestrator | Wednesday 28 May 2025 17:13:06 +0000 (0:00:46.371) 0:01:43.857 ********* 2025-05-28 17:14:02.233669 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:14:02.233686 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:14:02.233703 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:14:02.233719 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:14:02.233756 | orchestrator | changed: [testbed-manager] 2025-05-28 17:14:02.233772 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:14:02.233789 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:14:02.233806 | orchestrator | 2025-05-28 17:14:02.233822 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-05-28 17:14:02.233839 | orchestrator | Wednesday 28 May 2025 17:13:48 +0000 (0:00:42.289) 0:02:26.147 ********* 2025-05-28 17:14:02.233856 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:14:02.233873 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:14:02.233890 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:14:02.233907 | orchestrator | ok: [testbed-manager] 2025-05-28 17:14:02.233924 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:14:02.233942 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:14:02.233958 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:14:02.233974 | orchestrator | 2025-05-28 17:14:02.233991 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-05-28 17:14:02.234007 | orchestrator | Wednesday 
28 May 2025 17:13:50 +0000 (0:00:02.205) 0:02:28.352 ********* 2025-05-28 17:14:02.234065 | orchestrator | changed: [testbed-manager] 2025-05-28 17:14:02.234081 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:14:02.234098 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:14:02.234113 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:14:02.234138 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:14:02.234155 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:14:02.234171 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:14:02.234187 | orchestrator | 2025-05-28 17:14:02.234203 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:14:02.234234 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-28 17:14:02.234252 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-28 17:14:02.234269 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-28 17:14:02.234285 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-28 17:14:02.234302 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-28 17:14:02.234319 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-28 17:14:02.234336 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-28 17:14:02.234352 | orchestrator | 2025-05-28 17:14:02.234369 | orchestrator | 2025-05-28 17:14:02.234386 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:14:02.234401 | orchestrator | Wednesday 28 May 2025 17:14:00 +0000 (0:00:09.542) 0:02:37.894 ********* 2025-05-28 17:14:02.234418 | orchestrator | =============================================================================== 2025-05-28 17:14:02.234434 | orchestrator | common : Restart fluentd container ------------------------------------- 46.37s 2025-05-28 17:14:02.234450 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 42.29s 2025-05-28 17:14:02.234467 | orchestrator | common : Restart cron container ----------------------------------------- 9.54s 2025-05-28 17:14:02.234484 | orchestrator | common : Copying over config.json files for services -------------------- 6.60s 2025-05-28 17:14:02.234499 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 5.28s 2025-05-28 17:14:02.234516 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.01s 2025-05-28 17:14:02.234532 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.46s 2025-05-28 17:14:02.234548 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 4.00s 2025-05-28 17:14:02.234565 | orchestrator | common : Check common containers ---------------------------------------- 3.58s 2025-05-28 17:14:02.234581 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.34s 2025-05-28 17:14:02.234595 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.22s 2025-05-28 17:14:02.234608 | orchestrator | common : Copying over cron 
logrotate config file ------------------------ 2.99s 2025-05-28 17:14:02.234631 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.21s 2025-05-28 17:14:02.234645 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.02s 2025-05-28 17:14:02.234658 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.92s 2025-05-28 17:14:02.234672 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.92s 2025-05-28 17:14:02.234685 | orchestrator | common : Creating log volume -------------------------------------------- 1.72s 2025-05-28 17:14:02.234699 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.71s 2025-05-28 17:14:02.234712 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.49s 2025-05-28 17:14:02.234725 | orchestrator | common : include_tasks -------------------------------------------------- 1.46s
[10 near-identical status checks from 17:14:05 to 17:14:32 condensed: tasks bfaa9d9f-71b3-4ac9-aa05-b0d00637635e, 9fd50cfe-7db8-4d21-aad0-4cf1fd7217dc, 6a852715-df3c-432e-bb79-deeb7e8c5f47, 498abbe0-8763-4901-8190-d0026b259450, 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 and 31ad9459-261b-4617-89f4-12da6da9de0a were repeatedly reported in state STARTED, with a 1-second wait between checks; task 9fd50cfe-7db8-4d21-aad0-4cf1fd7217dc reached state SUCCESS at 17:14:20, after which task 6b838928-2e76-4cfa-82bc-f03285ede25b entered state STARTED]
2025-05-28 17:14:35.814654 | orchestrator | 2025-05-28 17:14:35 | INFO  | Task bfaa9d9f-71b3-4ac9-aa05-b0d00637635e is in state STARTED
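The status checks above follow a simple poll-and-wait pattern: query each outstanding task, drop the ones that reached SUCCESS, and sleep before the next round. A minimal Python sketch of that pattern (illustrative only; `get_task_state` is an assumed stand-in for however the OSISM client queries Celery task state, not a real API):

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1):
    """Poll a set of task IDs until every one reaches SUCCESS.

    get_task_state is an assumed callable returning a state string
    such as "STARTED" or "SUCCESS" for a given task ID.
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
```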
orchestrator | 2025-05-28 17:14:35 | INFO  | Task 6b838928-2e76-4cfa-82bc-f03285ede25b is in state STARTED 2025-05-28 17:14:35.820365 | orchestrator | 2025-05-28 17:14:35 | INFO  | Task 6a852715-df3c-432e-bb79-deeb7e8c5f47 is in state STARTED 2025-05-28 17:14:35.823235 | orchestrator | 2025-05-28 17:14:35 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:14:35.824959 | orchestrator | 2025-05-28 17:14:35 | INFO  | Task 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 is in state STARTED 2025-05-28 17:14:35.827318 | orchestrator | 2025-05-28 17:14:35 | INFO  | Task 31ad9459-261b-4617-89f4-12da6da9de0a is in state STARTED 2025-05-28 17:14:35.828079 | orchestrator | 2025-05-28 17:14:35 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:14:38.878289 | orchestrator | 2025-05-28 17:14:38 | INFO  | Task bfaa9d9f-71b3-4ac9-aa05-b0d00637635e is in state STARTED 2025-05-28 17:14:38.878443 | orchestrator | 2025-05-28 17:14:38 | INFO  | Task 6b838928-2e76-4cfa-82bc-f03285ede25b is in state STARTED 2025-05-28 17:14:38.879733 | orchestrator | 2025-05-28 17:14:38 | INFO  | Task 6a852715-df3c-432e-bb79-deeb7e8c5f47 is in state STARTED 2025-05-28 17:14:38.884122 | orchestrator | 2025-05-28 17:14:38 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:14:38.884157 | orchestrator | 2025-05-28 17:14:38 | INFO  | Task 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 is in state STARTED 2025-05-28 17:14:38.884743 | orchestrator | 2025-05-28 17:14:38 | INFO  | Task 31ad9459-261b-4617-89f4-12da6da9de0a is in state STARTED 2025-05-28 17:14:38.884767 | orchestrator | 2025-05-28 17:14:38 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:14:41.930373 | orchestrator | 2025-05-28 17:14:41 | INFO  | Task bfaa9d9f-71b3-4ac9-aa05-b0d00637635e is in state STARTED 2025-05-28 17:14:41.930884 | orchestrator | 2025-05-28 17:14:41 | INFO  | Task 6b838928-2e76-4cfa-82bc-f03285ede25b is in state STARTED 2025-05-28 17:14:41.934766 | orchestrator | 2025-05-28 17:14:41 | INFO  | Task 6a852715-df3c-432e-bb79-deeb7e8c5f47 is in state SUCCESS 2025-05-28 17:14:41.935655 | orchestrator | 2025-05-28 17:14:41.935720 | orchestrator | 2025-05-28 17:14:41.935741 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-28 17:14:41.935755 | orchestrator | 2025-05-28 17:14:41.935766 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-28 17:14:41.935777 | orchestrator | Wednesday 28 May 2025 17:14:06 +0000 (0:00:00.358) 0:00:00.358 ********* 2025-05-28 17:14:41.935788 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:14:41.935800 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:14:41.935811 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:14:41.935822 | orchestrator | 2025-05-28 17:14:41.935832 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-28 17:14:41.935843 | orchestrator | Wednesday 28 May 2025 17:14:07 +0000 (0:00:00.526) 0:00:00.885 ********* 2025-05-28 17:14:41.935854 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-05-28 17:14:41.935865 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-05-28 17:14:41.935876 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-05-28 17:14:41.935887 | orchestrator | 2025-05-28 17:14:41.935897 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-05-28 
17:14:41.935908 | orchestrator | 2025-05-28 17:14:41.935919 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-05-28 17:14:41.935930 | orchestrator | Wednesday 28 May 2025 17:14:08 +0000 (0:00:00.774) 0:00:01.659 ********* 2025-05-28 17:14:41.935940 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:14:41.935952 | orchestrator | 2025-05-28 17:14:41.935963 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-05-28 17:14:41.935973 | orchestrator | Wednesday 28 May 2025 17:14:09 +0000 (0:00:01.098) 0:00:02.758 ********* 2025-05-28 17:14:41.935984 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-28 17:14:41.935995 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-05-28 17:14:41.936005 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-28 17:14:41.936016 | orchestrator | 2025-05-28 17:14:41.936026 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-05-28 17:14:41.936037 | orchestrator | Wednesday 28 May 2025 17:14:10 +0000 (0:00:01.033) 0:00:03.792 ********* 2025-05-28 17:14:41.936047 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-28 17:14:41.936058 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-05-28 17:14:41.936068 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-28 17:14:41.936079 | orchestrator | 2025-05-28 17:14:41.936089 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-05-28 17:14:41.936100 | orchestrator | Wednesday 28 May 2025 17:14:13 +0000 (0:00:02.709) 0:00:06.502 ********* 2025-05-28 17:14:41.936110 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:14:41.936121 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:14:41.936131 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:14:41.936142 | orchestrator | 2025-05-28 17:14:41.936152 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-05-28 17:14:41.936163 | orchestrator | Wednesday 28 May 2025 17:14:14 +0000 (0:00:01.983) 0:00:08.485 ********* 2025-05-28 17:14:41.936173 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:14:41.936184 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:14:41.936194 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:14:41.936204 | orchestrator | 2025-05-28 17:14:41.936215 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:14:41.936243 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:14:41.936256 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:14:41.936269 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:14:41.936281 | orchestrator | 2025-05-28 17:14:41.936293 | orchestrator | 2025-05-28 17:14:41.936306 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:14:41.936335 | orchestrator | Wednesday 28 May 2025 17:14:18 +0000 (0:00:03.052) 0:00:11.537 ********* 2025-05-28 17:14:41.936348 | orchestrator | =============================================================================== 2025-05-28 
17:14:41.936359 | orchestrator | memcached : Restart memcached container --------------------------------- 3.05s 2025-05-28 17:14:41.936371 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.71s 2025-05-28 17:14:41.936383 | orchestrator | memcached : Check memcached container ----------------------------------- 1.98s 2025-05-28 17:14:41.936395 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.10s 2025-05-28 17:14:41.936407 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.03s 2025-05-28 17:14:41.936419 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.77s 2025-05-28 17:14:41.936431 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.53s 2025-05-28 17:14:41.936443 | orchestrator | 2025-05-28 17:14:41.936455 | orchestrator | 2025-05-28 17:14:41.936466 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-28 17:14:41.936478 | orchestrator | 2025-05-28 17:14:41.936490 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-28 17:14:41.936502 | orchestrator | Wednesday 28 May 2025 17:14:07 +0000 (0:00:00.555) 0:00:00.555 ********* 2025-05-28 17:14:41.936514 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:14:41.936526 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:14:41.936538 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:14:41.936550 | orchestrator | 2025-05-28 17:14:41.936562 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-28 17:14:41.936586 | orchestrator | Wednesday 28 May 2025 17:14:08 +0000 (0:00:00.654) 0:00:01.210 ********* 2025-05-28 17:14:41.936598 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-05-28 17:14:41.936611 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-05-28 17:14:41.936621 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-05-28 17:14:41.936632 | orchestrator | 2025-05-28 17:14:41.936643 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-05-28 17:14:41.936653 | orchestrator | 2025-05-28 17:14:41.936664 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-05-28 17:14:41.936674 | orchestrator | Wednesday 28 May 2025 17:14:09 +0000 (0:00:00.856) 0:00:02.066 ********* 2025-05-28 17:14:41.936722 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:14:41.936734 | orchestrator | 2025-05-28 17:14:41.936745 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-05-28 17:14:41.936755 | orchestrator | Wednesday 28 May 2025 17:14:10 +0000 (0:00:00.901) 0:00:02.968 ********* 2025-05-28 17:14:41.936769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 
6379'], 'timeout': '30'}}}) 2025-05-28 17:14:41.936795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-28 17:14:41.936807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-28 17:14:41.936819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-28 17:14:41.936831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-28 17:14:41.936852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-28 17:14:41.936864 | orchestrator | 2025-05-28 17:14:41.936875 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-05-28 
17:14:41.936886 | orchestrator | Wednesday 28 May 2025 17:14:11 +0000 (0:00:01.657) 0:00:04.625 ********* 2025-05-28 17:14:41.936897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-28 17:14:41.936916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-28 17:14:41.936927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-28 17:14:41.936938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-28 17:14:41.936964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-28 17:14:41.936983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-28 17:14:41.936994 | orchestrator | 2025-05-28 17:14:41.937005 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-05-28 17:14:41.937016 | orchestrator | Wednesday 28 May 2025 17:14:15 +0000 (0:00:03.318) 0:00:07.944 ********* 2025-05-28 17:14:41.937027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-28 17:14:41.937045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-28 17:14:41.937057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-28 17:14:41.937068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-28 17:14:41.937084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 
'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-28 17:14:41.937096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-28 17:14:41.937107 | orchestrator | 2025-05-28 17:14:41.937124 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-05-28 17:14:41.937135 | orchestrator | Wednesday 28 May 2025 17:14:18 +0000 (0:00:03.139) 0:00:11.083 ********* 2025-05-28 17:14:41.937146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-28 17:14:41.937164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-28 17:14:41.937175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-28 17:14:41.937187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-28 17:14:41.937203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-28 17:14:41.937215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-28 17:14:41.937226 | orchestrator | 2025-05-28 17:14:41.937237 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-28 17:14:41.937248 | orchestrator | Wednesday 28 May 2025 17:14:20 +0000 (0:00:01.637) 0:00:12.721 ********* 2025-05-28 17:14:41.937259 | orchestrator | 2025-05-28 17:14:41.937270 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-28 17:14:41.937286 | orchestrator | Wednesday 28 May 2025 17:14:20 +0000 (0:00:00.237) 0:00:12.958 ********* 2025-05-28 17:14:41.937304 | orchestrator | 2025-05-28 17:14:41.937315 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-28 17:14:41.937325 | orchestrator | Wednesday 28 May 2025 17:14:20 +0000 (0:00:00.118) 0:00:13.076 ********* 2025-05-28 17:14:41.937336 | orchestrator | 2025-05-28 17:14:41.937347 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-05-28 17:14:41.937357 | orchestrator | Wednesday 28 May 2025 17:14:20 +0000 (0:00:00.103) 0:00:13.180 ********* 2025-05-28 17:14:41.937368 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:14:41.937379 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:14:41.937389 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:14:41.937400 | orchestrator | 2025-05-28 17:14:41.937411 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-05-28 17:14:41.937422 | orchestrator | Wednesday 28 May 2025 17:14:28 +0000 (0:00:08.402) 0:00:21.582 ********* 2025-05-28 17:14:41.937432 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:14:41.937443 | orchestrator | changed: 
[testbed-node-1] 2025-05-28 17:14:41.937454 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:14:41.937465 | orchestrator | 2025-05-28 17:14:41.937475 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:14:41.937486 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:14:41.937497 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:14:41.937508 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:14:41.937519 | orchestrator | 2025-05-28 17:14:41.937530 | orchestrator | 2025-05-28 17:14:41.937541 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:14:41.937551 | orchestrator | Wednesday 28 May 2025 17:14:39 +0000 (0:00:10.100) 0:00:31.683 ********* 2025-05-28 17:14:41.937562 | orchestrator | =============================================================================== 2025-05-28 17:14:41.937573 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.10s 2025-05-28 17:14:41.937583 | orchestrator | redis : Restart redis container ----------------------------------------- 8.40s 2025-05-28 17:14:41.937594 | orchestrator | redis : Copying over default config.json files -------------------------- 3.32s 2025-05-28 17:14:41.937605 | orchestrator | redis : Copying over redis config files --------------------------------- 3.14s 2025-05-28 17:14:41.937615 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.66s 2025-05-28 17:14:41.937626 | orchestrator | redis : Check redis containers ------------------------------------------ 1.64s 2025-05-28 17:14:41.937637 | orchestrator | redis : include_tasks --------------------------------------------------- 0.90s 2025-05-28 17:14:41.937647 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.86s 2025-05-28 17:14:41.937658 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.65s 2025-05-28 17:14:41.937668 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.46s 2025-05-28 17:14:41.940285 | orchestrator | 2025-05-28 17:14:41 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:14:41.942474 | orchestrator | 2025-05-28 17:14:41 | INFO  | Task 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 is in state STARTED 2025-05-28 17:14:41.943409 | orchestrator | 2025-05-28 17:14:41 | INFO  | Task 31ad9459-261b-4617-89f4-12da6da9de0a is in state STARTED 2025-05-28 17:14:41.943431 | orchestrator | 2025-05-28 17:14:41 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:14:44.984930 | orchestrator | 2025-05-28 17:14:44 | INFO  | Task bfaa9d9f-71b3-4ac9-aa05-b0d00637635e is in state STARTED 2025-05-28 17:14:44.988256 | orchestrator | 2025-05-28 17:14:44 | INFO  | Task 6b838928-2e76-4cfa-82bc-f03285ede25b is in state STARTED 2025-05-28 17:14:44.988632 | orchestrator | 2025-05-28 17:14:44 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:14:44.989660 | orchestrator | 2025-05-28 17:14:44 | INFO  | Task 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 is in state STARTED 2025-05-28 17:14:44.990405 | orchestrator | 2025-05-28 17:14:44 | INFO  | Task 31ad9459-261b-4617-89f4-12da6da9de0a is in state STARTED 2025-05-28 
17:14:44.990453 | orchestrator | 2025-05-28 17:14:44 | INFO  | Wait 1 second(s) until the next check
[8 near-identical status checks from 17:14:48 to 17:15:09 condensed: tasks bfaa9d9f-71b3-4ac9-aa05-b0d00637635e, 6b838928-2e76-4cfa-82bc-f03285ede25b, 498abbe0-8763-4901-8190-d0026b259450, 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 and 31ad9459-261b-4617-89f4-12da6da9de0a were repeatedly reported in state STARTED, with a 1-second wait between checks]
2025-05-28 17:15:12.441808 | orchestrator | 2025-05-28 17:15:12 | INFO  | Task bfaa9d9f-71b3-4ac9-aa05-b0d00637635e is in state STARTED 2025-05-28 17:15:12.442263 | orchestrator | 2025-05-28 17:15:12 | INFO  | Task 6b838928-2e76-4cfa-82bc-f03285ede25b is in state STARTED 2025-05-28 17:15:12.442964 | orchestrator | 2025-05-28 17:15:12 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:15:12.443763 | orchestrator | 2025-05-28 17:15:12 | INFO  | Task 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 is in state STARTED 2025-05-28 17:15:12.444834 | orchestrator | 2025-05-28 17:15:12 | INFO  | Task 31ad9459-261b-4617-89f4-12da6da9de0a is in
2025-05-28 17:15:15.489563 | orchestrator | 2025-05-28 17:15:15 | INFO  | Task bfaa9d9f-71b3-4ac9-aa05-b0d00637635e is in state SUCCESS
[identical per-node result lines in the replayed playbook output below are condensed to the form "testbed-node-0..5"]
orchestrator |
orchestrator | PLAY [Group hosts based on configuration] **************************************
orchestrator |
orchestrator | TASK [Group hosts based on Kolla action] ***************************************
orchestrator | Wednesday 28 May 2025 17:14:07 +0000 (0:00:00.431)       0:00:00.431 *********
orchestrator | ok: [testbed-node-0..5]
orchestrator |
orchestrator | TASK [Group hosts based on enabled services] ***********************************
orchestrator | Wednesday 28 May 2025 17:14:08 +0000 (0:00:01.007)       0:00:01.439 *********
orchestrator | ok: [testbed-node-0..5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
orchestrator |
orchestrator | PLAY [Apply role openvswitch] **************************************************
orchestrator |
orchestrator | TASK [openvswitch : include_tasks] *********************************************
orchestrator | Wednesday 28 May 2025 17:14:09 +0000 (0:00:01.038)       0:00:02.477 *********
orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [module-load : Load modules] **********************************************
orchestrator | Wednesday 28 May 2025 17:14:11 +0000 (0:00:01.731)       0:00:04.209 *********
orchestrator | changed: [testbed-node-0..5] => (item=openvswitch)
orchestrator |
orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
orchestrator | Wednesday 28 May 2025 17:14:13 +0000 (0:00:01.997)       0:00:06.206 *********
orchestrator | changed: [testbed-node-0..5] => (item=openvswitch)
orchestrator |
orchestrator | TASK [module-load : Drop module persistence] ***********************************
orchestrator | Wednesday 28 May 2025 17:14:15 +0000 (0:00:01.961)       0:00:08.167 *********
orchestrator | skipping: [testbed-node-0..5] => (item=openvswitch)
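The two module-load tasks above load the `openvswitch` kernel module immediately and then persist it so it is loaded again on boot. A minimal sketch of that load-and-persist pattern (an approximation; the Kolla `module-load` role may differ in detail):

```yaml
# Sketch of the module-load pattern, assuming the standard modules:
- name: Load the openvswitch kernel module now
  community.general.modprobe:
    name: openvswitch
    state: present

- name: Persist the module across reboots via modules-load.d
  ansible.builtin.copy:
    content: "openvswitch\n"
    dest: /etc/modules-load.d/openvswitch.conf
    mode: "0644"
```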
orchestrator |
orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
orchestrator | Wednesday 28 May 2025 17:14:16 +0000 (0:00:01.338)       0:00:09.506 *********
orchestrator | skipping: [testbed-node-0..5]
orchestrator |
orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
orchestrator | Wednesday 28 May 2025 17:14:17 +0000 (0:00:00.692)       0:00:10.198 *********
orchestrator | changed: [testbed-node-0..5] => (item={'key': 'openvswitch-db-server', 'value': {
orchestrator |   'container_name': 'openvswitch_db',
orchestrator |   'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2',
orchestrator |   'enabled': True, 'group': 'openvswitch', 'host_in_groups': True,
orchestrator |   'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro',
orchestrator |               '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
orchestrator |               '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared',
orchestrator |               'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'],
orchestrator |   'dimensions': {},
orchestrator |   'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5',
orchestrator |                   'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
orchestrator | changed: [testbed-node-0..5] => (item={'key': 'openvswitch-vswitchd', 'value': {
orchestrator |   'container_name': 'openvswitch_vswitchd',
orchestrator |   'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2',
orchestrator |   'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True,
orchestrator |   'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro',
orchestrator |               '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
orchestrator |               '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared',
orchestrator |               'kolla_logs:/var/log/kolla/'],
orchestrator |   'dimensions': {},
orchestrator |   'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5',
orchestrator |                   'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
orchestrator | [the same two item dicts are repeated verbatim for every node; repeats elided here and in the following tasks]
orchestrator |
orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
orchestrator | Wednesday 28 May 2025 17:14:19 +0000 (0:00:01.705)       0:00:11.904 *********
orchestrator | changed: [testbed-node-0..5] => (item=openvswitch-db-server)   [same item dict as above]
orchestrator | changed: [testbed-node-0..5] => (item=openvswitch-vswitchd)    [same item dict as above]
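Each of the two service definitions above carries a Docker healthcheck: `ovsdb-client list-dbs` for the database container and `ovs-appctl version` for vswitchd. As a rough sketch, the `openvswitch_db` definition maps onto a plain `community.docker.docker_container` task like the one below (an approximation only; kolla-ansible deploys containers through its own wrapper module, not `community.docker`):

```yaml
# Approximate translation of the logged service definition; volume list
# shortened to the OVS-specific mounts.
- name: Run openvswitch_db with the healthcheck from the service definition
  community.docker.docker_container:
    name: openvswitch_db
    image: registry.osism.tech/kolla/openvswitch-db-server:2024.2
    volumes:
      - /etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro
      - /run/openvswitch:/run/openvswitch:shared
      - openvswitch_db:/var/lib/openvswitch/
    healthcheck:
      test: ["CMD-SHELL", "ovsdb-client list-dbs"]
      interval: 30s
      timeout: 30s
      retries: 3
      start_period: 5s
```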
orchestrator |
orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
orchestrator | Wednesday 28 May 2025 17:14:22 +0000 (0:00:02.977)       0:00:14.882 *********
orchestrator | skipping: [testbed-node-0..5]
orchestrator |
orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
orchestrator | Wednesday 28 May 2025 17:14:23 +0000 (0:00:01.340)       0:00:16.222 *********
orchestrator | changed: [testbed-node-0..5] => (item=openvswitch-db-server)   [same item dict as above]
orchestrator | changed: [testbed-node-0..5] => (item=openvswitch-vswitchd)    [same item dict as above]
orchestrator |
orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
orchestrator | Wednesday 28 May 2025 17:14:26 +0000 (0:00:02.317)       0:00:18.539 *********
orchestrator | [five further no-op "Flush Handlers" entries at 17:14:26 elided: 0:00:18.662, 0:00:18.795, 0:00:18.915, 0:00:19.037, 0:00:19.157]
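The repeated "Flush Handlers" entries exist so that notified handlers (the container restarts below) run immediately after the configuration changes that triggered them, instead of at the end of the play. A sketch of the pattern, assuming the standard `meta: flush_handlers` mechanism (the template path here is illustrative, not the role's actual file):

```yaml
# Sketch: change config, then force notified handlers to run right away.
- name: Copy service configuration
  ansible.builtin.template:
    src: config.json.j2
    dest: /etc/kolla/openvswitch-db-server/config.json   # hypothetical path
    mode: "0660"
  notify: Restart openvswitch-db-server container

- name: Flush Handlers
  ansible.builtin.meta: flush_handlers   # run notified handlers now, not at play end
```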
orchestrator |
orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
orchestrator | Wednesday 28 May 2025 17:14:26 +0000 (0:00:00.239)       0:00:19.396 *********
orchestrator | changed: [testbed-node-0..5]
orchestrator |
orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
orchestrator | Wednesday 28 May 2025 17:14:38 +0000 (0:00:11.692)       0:00:31.089 *********
orchestrator | ok: [testbed-node-0..5]
orchestrator |
orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
orchestrator | Wednesday 28 May 2025 17:14:40 +0000 (0:00:02.089)       0:00:33.179 *********
orchestrator | changed: [testbed-node-0..5]
orchestrator |
orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
orchestrator | Wednesday 28 May 2025 17:14:51 +0000 (0:00:10.493)       0:00:43.672 *********
orchestrator | changed: [testbed-node-0..5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': '<the node's own name>'})
orchestrator | changed: [testbed-node-0..5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': '<the node's own name>'})
orchestrator | ok: [testbed-node-0..5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
orchestrator |
orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
orchestrator | Wednesday 28 May 2025 17:14:58 +0000 (0:00:07.809)       0:00:51.481 *********
orchestrator | skipping: [testbed-node-3..5] => (item=br-ex)
orchestrator | changed: [testbed-node-0..2] => (item=br-ex)
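These tasks drive ovs-vsctl: they record each node's identity in the `Open_vSwitch` table's `external_ids`, then (on the three network nodes) create the `br-ex` bridge; the matching `vxlan0` port task follows below. A minimal sketch of the same operations using the `openvswitch.openvswitch` collection (an assumption: kolla-ansible wraps ovs-vsctl itself rather than using these modules):

```yaml
# Sketch with openvswitch.openvswitch modules; not the role's actual tasks.
- name: Set system-id in Open_vSwitch external_ids
  openvswitch.openvswitch.openvswitch_db:
    table: Open_vSwitch
    record: .
    col: external_ids
    key: system-id
    value: "{{ inventory_hostname }}"

- name: Ensure the br-ex bridge exists
  openvswitch.openvswitch.openvswitch_bridge:
    bridge: br-ex
    state: present

- name: Ensure the vxlan0 port is attached to br-ex
  openvswitch.openvswitch.openvswitch_port:
    bridge: br-ex
    port: vxlan0      # the real interface may need extra vxlan options
    state: present
```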
orchestrator |
orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
orchestrator | Wednesday 28 May 2025 17:15:01 +0000 (0:00:02.088)       0:00:53.569 *********
orchestrator | skipping: [testbed-node-3..5] => (item=['br-ex', 'vxlan0'])
orchestrator | changed: [testbed-node-0..2] => (item=['br-ex', 'vxlan0'])
orchestrator |
orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
orchestrator | Wednesday 28 May 2025 17:15:04 +0000 (0:00:03.828)       0:00:57.398 *********
orchestrator | changed: [testbed-node-0..5]
orchestrator |
orchestrator | PLAY RECAP *********************************************************************
orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0  failed=0  skipped=3  rescued=0  ignored=0
orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0  failed=0  skipped=3  rescued=0  ignored=0
orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0  failed=0  skipped=3  rescued=0  ignored=0
orchestrator | testbed-node-3 : ok=13  changed=9   unreachable=0  failed=0  skipped=5  rescued=0  ignored=0
orchestrator | testbed-node-4 : ok=13  changed=9   unreachable=0  failed=0  skipped=5  rescued=0  ignored=0
orchestrator | testbed-node-5 : ok=13  changed=9   unreachable=0  failed=0  skipped=5  rescued=0  ignored=0
orchestrator |
orchestrator | TASKS RECAP ********************************************************************
orchestrator | Wednesday 28 May 2025 17:15:12 +0000 (0:00:07.355)       0:01:04.754 *********
orchestrator | ===============================================================================
orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.85s
orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.69s
orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.81s
orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.83s
orchestrator | openvswitch : Copying over config.json files for services --------------- 2.98s
orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.32s
orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.09s
orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.09s
orchestrator | module-load : Load modules ---------------------------------------------- 2.00s
orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.96s
orchestrator | openvswitch : include_tasks --------------------------------------------- 1.73s
orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.71s
orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.34s
orchestrator | module-load : Drop module persistence ----------------------------------- 1.34s
orchestrator | Group hosts based on enabled services ----------------------------------- 1.04s
orchestrator | Group hosts based on Kolla action --------------------------------------- 1.01s
orchestrator | openvswitch : Flush Handlers -------------------------------------------- 0.86s
orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.69s
2025-05-28 17:15:15.497613 | orchestrator | 2025-05-28 17:15:15 | INFO  | Task 6b838928-2e76-4cfa-82bc-f03285ede25b is in state STARTED
2025-05-28 17:15:15.498069 | orchestrator | 2025-05-28 17:15:15 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED
2025-05-28 17:15:15.500073 | orchestrator | 2025-05-28 17:15:15 | INFO  | Task 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 is in state STARTED
2025-05-28 17:15:15.501112 | orchestrator | 2025-05-28 17:15:15 | INFO  | Task 31ad9459-261b-4617-89f4-12da6da9de0a is in state STARTED
2025-05-28 17:15:15.504499 | orchestrator | 2025-05-28 17:15:15 | INFO  | Task 1ea0541c-b057-47d5-b02b-8a8ffc1acf6d is in state STARTED
2025-05-28 17:15:15.504536 | orchestrator | 2025-05-28 17:15:15 | INFO  | Wait 1 second(s) until the next check
[... identical poll iterations (17:15:18 through 17:15:36) elided: the five remaining tasks stayed in state STARTED ...]
2025-05-28 17:15:39.962353 | orchestrator | 2025-05-28 17:15:39 | INFO  | Task 6b838928-2e76-4cfa-82bc-f03285ede25b is in state STARTED
2025-05-28 17:15:39.964075 | orchestrator | 2025-05-28 17:15:39 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED
2025-05-28 17:15:39.967297 | orchestrator | 2025-05-28 17:15:39 | INFO  | Task 341cbb8b-4e86-49d3-ab32-990f3e5de3b2 is in state SUCCESS
orchestrator |
orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
orchestrator |
orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
orchestrator | Wednesday 28 May 2025 17:11:23 +0000 (0:00:00.210)       0:00:00.210 *********
orchestrator | ok: [testbed-node-0..5]
orchestrator |
orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
orchestrator | Wednesday 28 May 2025 17:11:23 +0000 (0:00:00.689)       0:00:00.900 *********
orchestrator | skipping: [testbed-node-0..5]
orchestrator |
orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
orchestrator | Wednesday 28 May 2025 17:11:24 +0000 (0:00:00.756)       0:00:01.656 *********
orchestrator | skipping: [testbed-node-0..5]
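The "Validating arguments against arg spec 'main'" entries come from Ansible's role argument validation: when a role ships a `meta/argument_specs.yml`, Ansible inserts this implicit task before the role's own tasks run. A minimal sketch of what such a spec looks like (illustrative only; the option shown is hypothetical, and the real `k3s_prereq` spec is not visible in this log):

```yaml
# roles/k3s_prereq/meta/argument_specs.yml — illustrative shape only.
argument_specs:
  main:
    short_description: Prerequisites
    options:
      k3s_become:            # hypothetical option name
        type: bool
        default: false
        description: Escalate privileges for prerequisite tasks.
```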
orchestrator |
orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
orchestrator | Wednesday 28 May 2025 17:11:25 +0000 (0:00:00.941)       0:00:02.597 *********
orchestrator | changed: [testbed-node-0..5]
orchestrator |
orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
orchestrator | Wednesday 28 May 2025 17:11:28 +0000 (0:00:02.422)       0:00:05.020 *********
orchestrator | changed: [testbed-node-0..5]
orchestrator |
orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
orchestrator | Wednesday 28 May 2025 17:11:29 +0000 (0:00:01.090)       0:00:06.111 *********
orchestrator | changed: [testbed-node-0..5]
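The three forwarding tasks above are kernel sysctl toggles. A minimal sketch with `ansible.posix.sysctl`, assuming the role uses this module and the standard kernel key names:

```yaml
# Sketch of the forwarding toggles; the k3s_prereq role may set these
# slightly differently.
- name: Enable IPv4 forwarding
  ansible.posix.sysctl:
    name: net.ipv4.ip_forward
    value: "1"
    state: present
    reload: true

- name: Enable IPv6 forwarding
  ansible.posix.sysctl:
    name: net.ipv6.conf.all.forwarding
    value: "1"
    state: present
    reload: true

- name: Enable IPv6 router advertisements
  ansible.posix.sysctl:
    name: net.ipv6.conf.all.accept_ra
    value: "2"      # 2 = accept RAs even with forwarding enabled
    state: present
    reload: true
```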
17:15:39.969841 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:15:39.969851 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:15:39.969862 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:15:39.969872 | orchestrator | 2025-05-28 17:15:39.969883 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-05-28 17:15:39.969894 | orchestrator | Wednesday 28 May 2025 17:11:31 +0000 (0:00:00.683) 0:00:08.917 ********* 2025-05-28 17:15:39.969904 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-28 17:15:39.969915 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-28 17:15:39.969926 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:15:39.969936 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-28 17:15:39.969947 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-28 17:15:39.969957 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:15:39.969968 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-28 17:15:39.969979 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-28 17:15:39.969989 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:15:39.970000 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-28 17:15:39.970073 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-28 17:15:39.970089 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:15:39.970100 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-28 17:15:39.970111 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-28 17:15:39.970121 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:15:39.970132 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-28 17:15:39.970143 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-28 17:15:39.970153 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:15:39.970164 | orchestrator | 2025-05-28 17:15:39.970174 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-05-28 17:15:39.970185 | orchestrator | Wednesday 28 May 2025 17:11:33 +0000 (0:00:01.152) 0:00:10.070 ********* 2025-05-28 17:15:39.970205 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:15:39.970216 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:15:39.970227 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:15:39.970237 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:15:39.970248 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:15:39.970258 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:15:39.970269 | orchestrator | 2025-05-28 17:15:39.970280 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-05-28 17:15:39.970292 | orchestrator | Wednesday 28 May 2025 17:11:34 +0000 (0:00:01.361) 0:00:11.432 ********* 2025-05-28 17:15:39.970303 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:15:39.970314 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:15:39.970325 | orchestrator | ok: [testbed-node-5] 
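
The k3s_prereq tasks reported as changed above enable IPv4/IPv6 forwarding and IPv6 router advertisements via sysctl. A minimal hand-run equivalent on one node (a sketch: the drop-in file name is invented here, and accept_ra=2 is an assumption about how "Enable IPv6 router advertisements" is realized, not something this log states):

    # Persist the forwarding prerequisites, then apply them.
    cat <<'EOF' | sudo tee /etc/sysctl.d/90-k3s-prereq.conf
    net.ipv4.ip_forward = 1
    net.ipv6.conf.all.forwarding = 1
    net.ipv6.conf.all.accept_ra = 2
    EOF
    sudo sysctl --system   # reload all sysctl configuration files
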
2025-05-28 17:15:39.970335 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:15:39.970346 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:15:39.970357 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:15:39.970367 | orchestrator |
2025-05-28 17:15:39.970378 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2025-05-28 17:15:39.970389 | orchestrator | Wednesday 28 May 2025 17:11:35 +0000 (0:00:00.782) 0:00:12.214 *********
2025-05-28 17:15:39.970400 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:15:39.970410 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:15:39.970421 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:15:39.970476 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:15:39.970489 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:15:39.970499 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:15:39.970510 | orchestrator |
2025-05-28 17:15:39.970528 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2025-05-28 17:15:39.970539 | orchestrator | Wednesday 28 May 2025 17:11:41 +0000 (0:00:06.133) 0:00:18.347 *********
2025-05-28 17:15:39.970549 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:15:39.970590 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:15:39.970601 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:15:39.970642 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:15:39.970654 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:15:39.970665 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:15:39.970675 | orchestrator |
2025-05-28 17:15:39.970686 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2025-05-28 17:15:39.970697 | orchestrator | Wednesday 28 May 2025 17:11:42 +0000 (0:00:01.263) 0:00:19.611 *********
2025-05-28 17:15:39.970707 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:15:39.970718 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:15:39.970729 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:15:39.970739 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:15:39.970750 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:15:39.970760 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:15:39.970771 | orchestrator |
2025-05-28 17:15:39.970782 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2025-05-28 17:15:39.970795 | orchestrator | Wednesday 28 May 2025 17:11:44 +0000 (0:00:02.179) 0:00:21.790 *********
2025-05-28 17:15:39.970805 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:15:39.970816 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:15:39.970826 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:15:39.970837 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:15:39.970848 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:15:39.970858 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:15:39.970869 | orchestrator |
2025-05-28 17:15:39.970879 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2025-05-28 17:15:39.970890 | orchestrator | Wednesday 28 May 2025 17:11:45 +0000 (0:00:01.052) 0:00:22.842 *********
2025-05-28 17:15:39.970901 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2025-05-28 17:15:39.970920 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2025-05-28 17:15:39.970931 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:15:39.970941 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2025-05-28 17:15:39.970952 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2025-05-28 17:15:39.970963 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:15:39.970973 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2025-05-28 17:15:39.970984 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2025-05-28 17:15:39.970995 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:15:39.971006 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2025-05-28 17:15:39.971016 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2025-05-28 17:15:39.971027 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:15:39.971037 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2025-05-28 17:15:39.971048 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2025-05-28 17:15:39.971059 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:15:39.971069 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2025-05-28 17:15:39.971080 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2025-05-28 17:15:39.971091 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:15:39.971101 | orchestrator |
2025-05-28 17:15:39.971112 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2025-05-28 17:15:39.971132 | orchestrator | Wednesday 28 May 2025 17:11:47 +0000 (0:00:01.424) 0:00:24.267 *********
2025-05-28 17:15:39.971143 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:15:39.971154 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:15:39.971164 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:15:39.971175 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:15:39.971186 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:15:39.971196 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:15:39.971207 | orchestrator |
2025-05-28 17:15:39.971218 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2025-05-28 17:15:39.971229 | orchestrator |
2025-05-28 17:15:39.971239 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2025-05-28 17:15:39.971250 | orchestrator | Wednesday 28 May 2025 17:11:49 +0000 (0:00:01.708) 0:00:25.976 *********
2025-05-28 17:15:39.971261 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:15:39.971272 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:15:39.971282 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:15:39.971293 | orchestrator |
2025-05-28 17:15:39.971303 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2025-05-28 17:15:39.971314 | orchestrator | Wednesday 28 May 2025 17:11:50 +0000 (0:00:01.377) 0:00:27.353 *********
2025-05-28 17:15:39.971325 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:15:39.971335 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:15:39.971346 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:15:39.971357 | orchestrator |
2025-05-28 17:15:39.971367 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2025-05-28 17:15:39.971378 | orchestrator | Wednesday 28 May 2025 17:11:51 +0000 (0:00:01.065) 0:00:28.419 *********
2025-05-28 17:15:39.971389 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:15:39.971400 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:15:39.971410 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:15:39.971421 | orchestrator |
2025-05-28 17:15:39.971431 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2025-05-28 17:15:39.971442 | orchestrator | Wednesday 28 May 2025 17:11:52 +0000 (0:00:01.056) 0:00:29.475 *********
2025-05-28 17:15:39.971453 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:15:39.971464 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:15:39.971474 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:15:39.971485 | orchestrator |
2025-05-28 17:15:39.971496 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2025-05-28 17:15:39.971506 | orchestrator | Wednesday 28 May 2025 17:11:53 +0000 (0:00:00.720) 0:00:30.195 *********
2025-05-28 17:15:39.971523 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:15:39.971534 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:15:39.971545 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:15:39.971556 | orchestrator |
2025-05-28 17:15:39.971572 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2025-05-28 17:15:39.971583 | orchestrator | Wednesday 28 May 2025 17:11:53 +0000 (0:00:00.303) 0:00:30.499 *********
2025-05-28 17:15:39.971594 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 17:15:39.971605 | orchestrator |
2025-05-28 17:15:39.971671 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2025-05-28 17:15:39.971683 | orchestrator | Wednesday 28 May 2025 17:11:54 +0000 (0:00:00.683) 0:00:31.182 *********
2025-05-28 17:15:39.971693 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:15:39.971704 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:15:39.971715 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:15:39.971726 | orchestrator |
2025-05-28 17:15:39.971737 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2025-05-28 17:15:39.971748 | orchestrator | Wednesday 28 May 2025 17:11:57 +0000 (0:00:03.233) 0:00:34.416 *********
2025-05-28 17:15:39.971758 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:15:39.971769 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:15:39.971780 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:15:39.971791 | orchestrator |
2025-05-28 17:15:39.971802 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2025-05-28 17:15:39.971813 | orchestrator | Wednesday 28 May 2025 17:11:58 +0000 (0:00:00.828) 0:00:35.245 *********
2025-05-28 17:15:39.971823 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:15:39.971834 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:15:39.971845 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:15:39.971855 | orchestrator |
2025-05-28 17:15:39.971866 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2025-05-28 17:15:39.971877 | orchestrator | Wednesday 28 May 2025 17:11:59 +0000 (0:00:00.978) 0:00:36.223 *********
2025-05-28 17:15:39.971888 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:15:39.971899 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:15:39.971909 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:15:39.971920 | orchestrator |
2025-05-28 17:15:39.971931 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2025-05-28 17:15:39.971942 | orchestrator | Wednesday 28 May 2025 17:12:01 +0000 (0:00:02.261) 0:00:38.485 *********
2025-05-28 17:15:39.971953 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:15:39.971964 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:15:39.971975 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:15:39.971985 | orchestrator |
2025-05-28 17:15:39.971996 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2025-05-28 17:15:39.972007 | orchestrator | Wednesday 28 May 2025 17:12:01 +0000 (0:00:00.313) 0:00:38.799 *********
2025-05-28 17:15:39.972018 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:15:39.972028 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:15:39.972039 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:15:39.972050 | orchestrator |
2025-05-28 17:15:39.972061 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2025-05-28 17:15:39.972072 | orchestrator | Wednesday 28 May 2025 17:12:02 +0000 (0:00:00.462) 0:00:39.261 *********
2025-05-28 17:15:39.972083 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:15:39.972093 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:15:39.972104 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:15:39.972114 | orchestrator |
2025-05-28 17:15:39.972123 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2025-05-28 17:15:39.972133 | orchestrator | Wednesday 28 May 2025 17:12:04 +0000 (0:00:02.416) 0:00:41.678 *********
2025-05-28 17:15:39.972148 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-05-28 17:15:39.972169 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-05-28 17:15:39.972179 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-05-28 17:15:39.972189 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-05-28 17:15:39.972199 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-05-28 17:15:39.972208 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-05-28 17:15:39.972218 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-05-28 17:15:39.972227 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-05-28 17:15:39.972237 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
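
The retry loop here simply polls until all three masters have registered with the cluster; in this run it succeeds after several more attempts (the recap further down times the task at 55.10s). A hand-rolled equivalent of that wait, assuming three server nodes and the kubectl wrapper bundled with the k3s binary (a sketch, not the role's exact check):

    # Poll until all three masters register and report Ready.
    # ' Ready' with a leading space deliberately excludes 'NotReady'.
    until [ "$(sudo k3s kubectl get nodes --no-headers 2>/dev/null | grep -c ' Ready')" -eq 3 ]; do
        sleep 3
    done
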
2025-05-28 17:15:39.972246 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-05-28 17:15:39.972256 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-05-28 17:15:39.972266 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-05-28 17:15:39.972280 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-05-28 17:15:39.972290 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:15:39.972300 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:15:39.972309 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:15:39.972319 | orchestrator |
2025-05-28 17:15:39.972329 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2025-05-28 17:15:39.972339 | orchestrator | Wednesday 28 May 2025 17:12:59 +0000 (0:00:55.102) 0:01:36.781 *********
2025-05-28 17:15:39.972348 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:15:39.972358 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:15:39.972368 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:15:39.972377 | orchestrator |
2025-05-28 17:15:39.972387 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2025-05-28 17:15:39.972396 | orchestrator | Wednesday 28 May 2025 17:13:00 +0000 (0:00:00.422) 0:01:37.203 *********
2025-05-28 17:15:39.972406 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:15:39.972415 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:15:39.972425 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:15:39.972434 | orchestrator |
2025-05-28 17:15:39.972444 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2025-05-28 17:15:39.972454 | orchestrator | Wednesday 28 May 2025 17:13:01 +0000 (0:00:01.037) 0:01:38.241 *********
2025-05-28 17:15:39.972463 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:15:39.972473 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:15:39.972482 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:15:39.972492 | orchestrator |
2025-05-28 17:15:39.972501 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2025-05-28 17:15:39.972511 | orchestrator | Wednesday 28 May 2025 17:13:02 +0000 (0:00:01.215) 0:01:39.456 *********
2025-05-28 17:15:39.972520 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:15:39.972530 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:15:39.972547 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:15:39.972556 | orchestrator |
2025-05-28 17:15:39.972566 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2025-05-28 17:15:39.972576 | orchestrator | Wednesday 28 May 2025 17:13:16 +0000 (0:00:13.550) 0:01:53.007 *********
2025-05-28 17:15:39.972585 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:15:39.972595 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:15:39.972604 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:15:39.972628 | orchestrator |
2025-05-28 17:15:39.972637 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2025-05-28 17:15:39.972647 | orchestrator | Wednesday 28 May 2025 17:13:16 +0000 (0:00:00.803) 0:01:53.810 *********
2025-05-28 17:15:39.972657 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:15:39.972666 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:15:39.972676 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:15:39.972685 | orchestrator |
2025-05-28 17:15:39.972694 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2025-05-28 17:15:39.972704 | orchestrator | Wednesday 28 May 2025 17:13:17 +0000 (0:00:00.818) 0:01:54.629 *********
2025-05-28 17:15:39.972714 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:15:39.972723 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:15:39.972733 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:15:39.972742 | orchestrator |
2025-05-28 17:15:39.972752 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2025-05-28 17:15:39.972762 | orchestrator | Wednesday 28 May 2025 17:13:18 +0000 (0:00:00.697) 0:01:55.326 *********
2025-05-28 17:15:39.972771 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:15:39.972781 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:15:39.972790 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:15:39.972800 | orchestrator |
2025-05-28 17:15:39.972814 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2025-05-28 17:15:39.972824 | orchestrator | Wednesday 28 May 2025 17:13:19 +0000 (0:00:01.157) 0:01:56.484 *********
2025-05-28 17:15:39.972834 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:15:39.972843 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:15:39.972853 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:15:39.972862 | orchestrator |
2025-05-28 17:15:39.972872 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2025-05-28 17:15:39.972882 | orchestrator | Wednesday 28 May 2025 17:13:19 +0000 (0:00:00.357) 0:01:56.841 *********
2025-05-28 17:15:39.972892 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:15:39.972901 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:15:39.972911 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:15:39.972920 | orchestrator |
2025-05-28 17:15:39.972934 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2025-05-28 17:15:39.972950 | orchestrator | Wednesday 28 May 2025 17:13:20 +0000 (0:00:00.769) 0:01:57.610 *********
2025-05-28 17:15:39.972966 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:15:39.972982 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:15:39.972997 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:15:39.973012 | orchestrator |
2025-05-28 17:15:39.973029 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2025-05-28 17:15:39.973046 | orchestrator | Wednesday 28 May 2025 17:13:21 +0000 (0:00:00.727) 0:01:58.337 *********
2025-05-28 17:15:39.973062 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:15:39.973077 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:15:39.973086 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:15:39.973096 | orchestrator |
2025-05-28 17:15:39.973106 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2025-05-28 17:15:39.973115 | orchestrator | Wednesday 28 May 2025 17:13:22 +0000 (0:00:01.222) 0:01:59.560 *********
2025-05-28 17:15:39.973125 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:15:39.973134 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:15:39.973144 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:15:39.973153 | orchestrator |
2025-05-28 17:15:39.973171 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2025-05-28 17:15:39.973180 | orchestrator | Wednesday 28 May 2025 17:13:23 +0000 (0:00:00.865) 0:02:00.426 *********
2025-05-28 17:15:39.973190 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:15:39.973199 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:15:39.973209 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:15:39.973218 | orchestrator |
2025-05-28 17:15:39.973228 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2025-05-28 17:15:39.973243 | orchestrator | Wednesday 28 May 2025 17:13:23 +0000 (0:00:00.313) 0:02:00.740 *********
2025-05-28 17:15:39.973253 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:15:39.973262 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:15:39.973272 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:15:39.973281 | orchestrator |
2025-05-28 17:15:39.973290 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2025-05-28 17:15:39.973300 | orchestrator | Wednesday 28 May 2025 17:13:24 +0000 (0:00:00.298) 0:02:01.038 *********
2025-05-28 17:15:39.973309 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:15:39.973319 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:15:39.973328 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:15:39.973338 | orchestrator |
2025-05-28 17:15:39.973347 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2025-05-28 17:15:39.973357 | orchestrator | Wednesday 28 May 2025 17:13:24 +0000 (0:00:00.844) 0:02:01.883 *********
2025-05-28 17:15:39.973366 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:15:39.973376 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:15:39.973385 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:15:39.973394 | orchestrator |
2025-05-28 17:15:39.973404 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2025-05-28 17:15:39.973414 | orchestrator | Wednesday 28 May 2025 17:13:25 +0000 (0:00:00.644) 0:02:02.527 *********
2025-05-28 17:15:39.973423 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-05-28 17:15:39.973433 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-05-28 17:15:39.973442 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-05-28 17:15:39.973451 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-05-28 17:15:39.973461 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-05-28 17:15:39.973470 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-05-28 17:15:39.973480 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-05-28 17:15:39.973490 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-05-28 17:15:39.973503 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-05-28 17:15:39.973519 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-05-28 17:15:39.973535 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2025-05-28 17:15:39.973550 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-05-28 17:15:39.973565 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-05-28 17:15:39.973580 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2025-05-28 17:15:39.973596 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-05-28 17:15:39.973764 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-05-28 17:15:39.973808 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-05-28 17:15:39.973818 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-05-28 17:15:39.973828 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-05-28 17:15:39.973838 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-05-28 17:15:39.973847 | orchestrator |
2025-05-28 17:15:39.973857 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2025-05-28 17:15:39.973867 | orchestrator |
2025-05-28 17:15:39.973876 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2025-05-28 17:15:39.973886 | orchestrator | Wednesday 28 May 2025 17:13:28 +0000 (0:00:03.106) 0:02:05.634 *********
2025-05-28 17:15:39.973895 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:15:39.973905 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:15:39.973914 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:15:39.973921 | orchestrator |
2025-05-28 17:15:39.973929 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2025-05-28 17:15:39.973937 | orchestrator | Wednesday 28 May 2025 17:13:29 +0000 (0:00:00.524) 0:02:06.159 *********
2025-05-28 17:15:39.973945 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:15:39.973953 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:15:39.973960 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:15:39.973968 | orchestrator |
2025-05-28 17:15:39.973976 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2025-05-28 17:15:39.973983 | orchestrator | Wednesday 28 May 2025 17:13:29 +0000 (0:00:00.675) 0:02:06.834 *********
2025-05-28 17:15:39.973991 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:15:39.973999 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:15:39.974006 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:15:39.974014 | orchestrator |
2025-05-28 17:15:39.974049 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2025-05-28 17:15:39.974057 | orchestrator | Wednesday 28 May 2025 17:13:30 +0000 (0:00:00.316) 0:02:07.151 *********
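
The server play above exported the node-token from the first master; the agent play now starting consumes it to join the workers. For orientation, a manual agent join does the same thing (a sketch using the upstream install script rather than this role's mechanism, which installs the pre-downloaded binary; the URL is the kube-vip address seen in the "Configure kubectl cluster" task above):

    # On a master: read the cluster join token (k3s default path).
    sudo cat /var/lib/rancher/k3s/server/node-token
    # On a worker: join against the control-plane VIP.
    curl -sfL https://get.k3s.io | K3S_URL=https://192.168.16.8:6443 \
        K3S_TOKEN=<token-from-master> sh -s - agent
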
2025-05-28 17:15:39.974065 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-28 17:15:39.974073 | orchestrator |
2025-05-28 17:15:39.974085 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2025-05-28 17:15:39.974093 | orchestrator | Wednesday 28 May 2025 17:13:30 +0000 (0:00:00.644) 0:02:07.796 *********
2025-05-28 17:15:39.974101 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:15:39.974109 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:15:39.974117 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:15:39.974124 | orchestrator |
2025-05-28 17:15:39.974132 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2025-05-28 17:15:39.974140 | orchestrator | Wednesday 28 May 2025 17:13:31 +0000 (0:00:00.289) 0:02:08.085 *********
2025-05-28 17:15:39.974148 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:15:39.974156 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:15:39.974164 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:15:39.974171 | orchestrator |
2025-05-28 17:15:39.974179 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2025-05-28 17:15:39.974187 | orchestrator | Wednesday 28 May 2025 17:13:31 +0000 (0:00:00.295) 0:02:08.380 *********
2025-05-28 17:15:39.974195 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:15:39.974202 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:15:39.974210 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:15:39.974218 | orchestrator |
2025-05-28 17:15:39.974226 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2025-05-28 17:15:39.974234 | orchestrator | Wednesday 28 May 2025 17:13:31 +0000 (0:00:00.274) 0:02:08.655 *********
2025-05-28 17:15:39.974242 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:15:39.974249 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:15:39.974269 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:15:39.974277 | orchestrator |
2025-05-28 17:15:39.974285 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2025-05-28 17:15:39.974293 | orchestrator | Wednesday 28 May 2025 17:13:33 +0000 (0:00:01.444) 0:02:10.100 *********
2025-05-28 17:15:39.974300 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:15:39.974308 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:15:39.974316 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:15:39.974323 | orchestrator |
2025-05-28 17:15:39.974331 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-05-28 17:15:39.974339 | orchestrator |
2025-05-28 17:15:39.974347 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-05-28 17:15:39.974355 | orchestrator | Wednesday 28 May 2025 17:13:43 +0000 (0:00:10.024) 0:02:20.125 *********
2025-05-28 17:15:39.974362 | orchestrator | ok: [testbed-manager]
2025-05-28 17:15:39.974370 | orchestrator |
2025-05-28 17:15:39.974378 | orchestrator | TASK [Create .kube directory] **************************************************
2025-05-28 17:15:39.974386 | orchestrator | Wednesday 28 May 2025 17:13:43 +0000 (0:00:00.739) 0:02:20.865 *********
2025-05-28 17:15:39.974393 | orchestrator | changed: [testbed-manager]
2025-05-28 17:15:39.974401 | orchestrator |
2025-05-28 17:15:39.974409 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-05-28 17:15:39.974417 | orchestrator | Wednesday 28 May 2025 17:13:44 +0000 (0:00:00.385) 0:02:21.250 *********
2025-05-28 17:15:39.974425 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-05-28 17:15:39.974432 | orchestrator |
2025-05-28 17:15:39.974440 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-05-28 17:15:39.974448 | orchestrator | Wednesday 28 May 2025 17:13:45 +0000 (0:00:01.014) 0:02:22.264 *********
2025-05-28 17:15:39.974456 | orchestrator | changed: [testbed-manager]
2025-05-28 17:15:39.974463 | orchestrator |
2025-05-28 17:15:39.974471 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-05-28 17:15:39.974479 | orchestrator | Wednesday 28 May 2025 17:13:46 +0000 (0:00:00.806) 0:02:23.070 *********
2025-05-28 17:15:39.974492 | orchestrator | changed: [testbed-manager]
2025-05-28 17:15:39.974500 | orchestrator |
2025-05-28 17:15:39.974508 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-05-28 17:15:39.974516 | orchestrator | Wednesday 28 May 2025 17:13:46 +0000 (0:00:00.568) 0:02:23.639 *********
2025-05-28 17:15:39.974524 | orchestrator | changed: [testbed-manager -> localhost]
2025-05-28 17:15:39.974532 | orchestrator |
2025-05-28 17:15:39.974540 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-05-28 17:15:39.974548 | orchestrator | Wednesday 28 May 2025 17:13:48 +0000 (0:00:01.525) 0:02:25.164 *********
2025-05-28 17:15:39.974556 | orchestrator | changed: [testbed-manager -> localhost]
2025-05-28 17:15:39.974564 | orchestrator |
2025-05-28 17:15:39.974571 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-05-28 17:15:39.974579 | orchestrator | Wednesday 28 May 2025 17:13:49 +0000 (0:00:00.882) 0:02:26.047 *********
2025-05-28 17:15:39.974587 | orchestrator | changed: [testbed-manager]
2025-05-28 17:15:39.974595 | orchestrator |
2025-05-28 17:15:39.974603 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-05-28 17:15:39.974641 | orchestrator | Wednesday 28 May 2025 17:13:49 +0000 (0:00:00.439) 0:02:26.487 *********
2025-05-28 17:15:39.974650 | orchestrator | changed: [testbed-manager]
2025-05-28 17:15:39.974658 | orchestrator |
2025-05-28 17:15:39.974666 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2025-05-28 17:15:39.974682 | orchestrator |
2025-05-28 17:15:39.974690 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2025-05-28 17:15:39.974698 | orchestrator | Wednesday 28 May 2025 17:13:49 +0000 (0:00:00.442) 0:02:26.929 *********
2025-05-28 17:15:39.974706 | orchestrator | ok: [testbed-manager]
2025-05-28 17:15:39.974714 | orchestrator |
2025-05-28 17:15:39.974721 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2025-05-28 17:15:39.974735 | orchestrator | Wednesday 28 May 2025 17:13:50 +0000 (0:00:00.154) 0:02:27.083 *********
2025-05-28 17:15:39.974743 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2025-05-28 17:15:39.974751 | orchestrator |
2025-05-28 17:15:39.974759 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2025-05-28 17:15:39.974766 | orchestrator | Wednesday 28 May 2025 17:13:50 +0000 (0:00:00.497) 0:02:27.581 *********
2025-05-28 17:15:39.974774 | orchestrator | ok: [testbed-manager]
2025-05-28 17:15:39.974782 | orchestrator |
2025-05-28 17:15:39.974790 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2025-05-28 17:15:39.974798 | orchestrator | Wednesday 28 May 2025 17:13:51 +0000 (0:00:00.839) 0:02:28.420 *********
2025-05-28 17:15:39.974806 | orchestrator | ok: [testbed-manager]
2025-05-28 17:15:39.974814 | orchestrator |
2025-05-28 17:15:39.975439 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2025-05-28 17:15:39.975455 | orchestrator | Wednesday 28 May 2025 17:13:53 +0000 (0:00:01.634) 0:02:30.055 *********
2025-05-28 17:15:39.975463 | orchestrator | changed: [testbed-manager]
2025-05-28 17:15:39.975472 | orchestrator |
2025-05-28 17:15:39.975480 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2025-05-28 17:15:39.975487 | orchestrator | Wednesday 28 May 2025 17:13:53 +0000 (0:00:00.788) 0:02:30.844 *********
2025-05-28 17:15:39.975495 | orchestrator | ok: [testbed-manager]
2025-05-28 17:15:39.975503 | orchestrator |
2025-05-28 17:15:39.975511 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2025-05-28 17:15:39.975519 | orchestrator | Wednesday 28 May 2025 17:13:54 +0000 (0:00:00.437) 0:02:31.281 *********
2025-05-28 17:15:39.975527 | orchestrator | changed: [testbed-manager]
2025-05-28 17:15:39.975534 | orchestrator |
2025-05-28 17:15:39.975542 | orchestrator | TASK [kubectl : Install required packages] *************************************
2025-05-28 17:15:39.975550 | orchestrator | Wednesday 28 May 2025 17:14:00 +0000 (0:00:06.677) 0:02:37.959 *********
2025-05-28 17:15:39.975558 | orchestrator | changed: [testbed-manager]
2025-05-28 17:15:39.975566 | orchestrator |
2025-05-28 17:15:39.975574 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2025-05-28 17:15:39.975582 | orchestrator | Wednesday 28 May 2025 17:14:12 +0000 (0:00:11.181) 0:02:49.140 *********
2025-05-28 17:15:39.975590 | orchestrator | ok: [testbed-manager]
2025-05-28 17:15:39.975597 | orchestrator |
2025-05-28 17:15:39.975605 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2025-05-28 17:15:39.975629 | orchestrator |
2025-05-28 17:15:39.975637 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2025-05-28 17:15:39.975645 | orchestrator | Wednesday 28 May 2025 17:14:12 +0000 (0:00:00.560) 0:02:49.701 *********
2025-05-28 17:15:39.975653 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:15:39.975660 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:15:39.975668 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:15:39.975676 | orchestrator |
2025-05-28 17:15:39.975684 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2025-05-28 17:15:39.975692 | orchestrator | Wednesday 28 May 2025 17:14:13 +0000 (0:00:00.422) 0:02:50.123 *********
2025-05-28 17:15:39.975700 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:15:39.975708 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:15:39.975716 | orchestrator | skipping: [testbed-node-2]
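
The kubeconfig plays above amount to fetching the admin kubeconfig from the first master and retargeting it from localhost to the kube-vip address. A rough hand-run equivalent on the manager (a sketch: the source path is the k3s default and the sed pattern is an assumption about what "Change server address" rewrites):

    # Fetch the admin kubeconfig from the first master and point it at the VIP.
    mkdir -p ~/.kube
    ssh testbed-node-0 sudo cat /etc/rancher/k3s/k3s.yaml > ~/.kube/config
    sed -i 's|https://127.0.0.1:6443|https://192.168.16.8:6443|' ~/.kube/config
    export KUBECONFIG=~/.kube/config
    kubectl get nodes   # sanity check against the VIP
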
2025-05-28 17:15:39.975724 | orchestrator |
2025-05-28 17:15:39.975731 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2025-05-28 17:15:39.975739 | orchestrator | Wednesday 28 May 2025 17:14:13 +0000 (0:00:00.270) 0:02:50.394 *********
2025-05-28 17:15:39.975748 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 17:15:39.975756 | orchestrator |
2025-05-28 17:15:39.975763 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2025-05-28 17:15:39.975771 | orchestrator | Wednesday 28 May 2025 17:14:13 +0000 (0:00:00.504) 0:02:50.898 *********
2025-05-28 17:15:39.975788 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-05-28 17:15:39.975796 | orchestrator |
2025-05-28 17:15:39.975803 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2025-05-28 17:15:39.975811 | orchestrator | Wednesday 28 May 2025 17:14:14 +0000 (0:00:00.792) 0:02:51.691 *********
2025-05-28 17:15:39.975828 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-28 17:15:39.975836 | orchestrator |
2025-05-28 17:15:39.975844 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2025-05-28 17:15:39.975852 | orchestrator | Wednesday 28 May 2025 17:14:15 +0000 (0:00:00.726) 0:02:52.417 *********
2025-05-28 17:15:39.975859 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:15:39.975867 | orchestrator |
2025-05-28 17:15:39.975875 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2025-05-28 17:15:39.975883 | orchestrator | Wednesday 28 May 2025 17:14:16 +0000 (0:00:00.621) 0:02:53.039 *********
2025-05-28 17:15:39.975891 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-28 17:15:39.975899 | orchestrator |
2025-05-28 17:15:39.975906 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2025-05-28 17:15:39.975914 | orchestrator | Wednesday 28 May 2025 17:14:17 +0000 (0:00:00.973) 0:02:54.013 *********
2025-05-28 17:15:39.975922 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:15:39.975930 | orchestrator |
2025-05-28 17:15:39.975937 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2025-05-28 17:15:39.975945 | orchestrator | Wednesday 28 May 2025 17:14:17 +0000 (0:00:00.168) 0:02:54.181 *********
2025-05-28 17:15:39.975953 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:15:39.975961 | orchestrator |
2025-05-28 17:15:39.975969 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2025-05-28 17:15:39.975981 | orchestrator | Wednesday 28 May 2025 17:14:17 +0000 (0:00:00.209) 0:02:54.391 *********
2025-05-28 17:15:39.975989 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:15:39.975997 | orchestrator |
2025-05-28 17:15:39.976005 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2025-05-28 17:15:39.976013 | orchestrator | Wednesday 28 May 2025 17:14:17 +0000 (0:00:00.173) 0:02:54.565 *********
2025-05-28 17:15:39.976021 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:15:39.976029 | orchestrator |
2025-05-28 17:15:39.976036 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2025-05-28 17:15:39.976044 | orchestrator | Wednesday 28 May 2025 17:14:17 +0000 (0:00:00.197) 0:02:54.763 *********
2025-05-28 17:15:39.976052 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-05-28 17:15:39.976060 | orchestrator |
2025-05-28 17:15:39.976068 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2025-05-28 17:15:39.976076 | orchestrator | Wednesday 28 May 2025 17:14:22 +0000 (0:00:04.567) 0:02:59.330 *********
2025-05-28 17:15:39.976083 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2025-05-28 17:15:39.976091 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2025-05-28 17:15:39.976099 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2025-05-28 17:15:39.976107 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2025-05-28 17:15:39.976115 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2025-05-28 17:15:39.976123 | orchestrator |
2025-05-28 17:15:39.976131 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2025-05-28 17:15:39.976139 | orchestrator | Wednesday 28 May 2025 17:15:10 +0000 (0:00:48.406) 0:03:47.737 *********
2025-05-28 17:15:39.976146 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-28 17:15:39.976154 | orchestrator |
2025-05-28 17:15:39.976162 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2025-05-28 17:15:39.976170 | orchestrator | Wednesday 28 May 2025 17:15:11 +0000 (0:00:01.119) 0:03:48.856 *********
2025-05-28 17:15:39.976183 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-05-28 17:15:39.976191 | orchestrator |
2025-05-28 17:15:39.976199 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2025-05-28 17:15:39.976207 | orchestrator | Wednesday 28 May 2025 17:15:13 +0000 (0:00:01.572) 0:03:50.428 *********
2025-05-28 17:15:39.976214 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-05-28 17:15:39.976222 | orchestrator |
2025-05-28 17:15:39.976230 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2025-05-28 17:15:39.976238 | orchestrator | Wednesday 28 May 2025 17:15:14 +0000 (0:00:01.281) 0:03:51.710 *********
2025-05-28 17:15:39.976246 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:15:39.976254 | orchestrator |
2025-05-28 17:15:39.976261 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2025-05-28 17:15:39.976269 | orchestrator | Wednesday 28 May 2025 17:15:14 +0000 (0:00:00.246) 0:03:51.956 *********
2025-05-28 17:15:39.976277 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2025-05-28 17:15:39.976285 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2025-05-28 17:15:39.976293 | orchestrator |
2025-05-28 17:15:39.976301 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2025-05-28 17:15:39.976309 | orchestrator | Wednesday 28 May 2025 17:15:17 +0000 (0:00:02.812) 0:03:54.769 *********
2025-05-28 17:15:39.976317 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:15:39.976325 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:15:39.976333 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:15:39.976341 | orchestrator |
2025-05-28 17:15:39.976349 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2025-05-28 17:15:39.976357 | orchestrator | Wednesday 28 May 2025 17:15:18 +0000 (0:00:00.328) 0:03:55.097 *********
2025-05-28 17:15:39.976364 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:15:39.976372 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:15:39.976380 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:15:39.976388 | orchestrator |
2025-05-28 17:15:39.976395 | orchestrator | PLAY [Apply role k9s] **********************************************************
2025-05-28 17:15:39.976403 | orchestrator |
2025-05-28 17:15:39.976411 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2025-05-28 17:15:39.976419 | orchestrator | Wednesday 28 May 2025 17:15:19 +0000 (0:00:00.910) 0:03:56.007 *********
2025-05-28 17:15:39.976427 | orchestrator | ok: [testbed-manager]
2025-05-28 17:15:39.976435 | orchestrator |
2025-05-28 17:15:39.976448 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2025-05-28 17:15:39.976456 | orchestrator | Wednesday 28 May 2025 17:15:19 +0000 (0:00:00.136) 0:03:56.144 *********
2025-05-28 17:15:39.976463 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2025-05-28 17:15:39.976471 | orchestrator |
2025-05-28 17:15:39.976479 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2025-05-28 17:15:39.976487 | orchestrator | Wednesday 28 May 2025 17:15:19 +0000 (0:00:00.562) 0:03:56.707 *********
2025-05-28 17:15:39.976495 | orchestrator | changed: [testbed-manager]
2025-05-28 17:15:39.976502 | orchestrator |
2025-05-28 17:15:39.976510 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2025-05-28 17:15:39.976518 | orchestrator |
2025-05-28 17:15:39.976526 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2025-05-28 17:15:39.976534 | orchestrator | Wednesday 28 May 2025 17:15:26 +0000 (0:00:06.456) 0:04:03.164 *********
2025-05-28 17:15:39.976542 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:15:39.976549 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:15:39.976557 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:15:39.976565 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:15:39.976573 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:15:39.976581 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:15:39.976588 | orchestrator |
2025-05-28 17:15:39.976596 | orchestrator | TASK [Manage labels] ***********************************************************
2025-05-28 17:15:39.976664 | orchestrator | Wednesday 28 May 2025 17:15:26 +0000 (0:00:00.685) 0:04:03.850 *********
2025-05-28 17:15:39.976674 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-05-28 17:15:39.976682 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-05-28 17:15:39.976690 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-05-28 17:15:39.976698 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-05-28 17:15:39.976705 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-05-28 17:15:39.976713 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-05-28 17:15:39.976721 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-05-28 17:15:39.976729 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2025-05-28 17:15:39.976736 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2025-05-28 17:15:39.976744 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-05-28 17:15:39.976752 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-05-28 17:15:39.976760 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2025-05-28 17:15:39.976767 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-05-28 17:15:39.976775 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-05-28 17:15:39.976783 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-05-28 17:15:39.976791 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-05-28 17:15:39.976798 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-05-28 17:15:39.976806 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-05-28 17:15:39.976814 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-05-28 17:15:39.976822 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-05-28 17:15:39.976829 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-05-28 17:15:39.976837 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-05-28 17:15:39.976845 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-05-28 17:15:39.976853 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-05-28 17:15:39.976860 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-05-28 17:15:39.976868 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-05-28 17:15:39.976876 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-05-28 17:15:39.976884 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-05-28 17:15:39.976892 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-05-28 17:15:39.976899 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-05-28 17:15:39.976907 | orchestrator |
2025-05-28 17:15:39.976915 | orchestrator | TASK [Manage annotations] ******************************************************
2025-05-28 17:15:39.976923 | orchestrator | Wednesday 28 May 2025 17:15:38 +0000 (0:00:11.698) 0:04:15.548 *********
2025-05-28 17:15:39.976931 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:15:39.976945 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:15:39.976953 | orchestrator | skipping: [testbed-node-5]
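
The label items in the "Manage labels" task above map one-to-one onto kubectl invocations. Equivalent commands for one control-plane and one worker node (a sketch; --overwrite is an assumption added here to keep the commands idempotent, the labels themselves are copied from the task output):

    # Control-plane node example.
    kubectl label node testbed-node-0 node-role.osism.tech/control-plane=true --overwrite
    kubectl label node testbed-node-0 openstack-control-plane=enabled --overwrite
    # Worker node example.
    kubectl label node testbed-node-3 node-role.kubernetes.io/worker=worker --overwrite
    kubectl label node testbed-node-3 node-role.osism.tech/rook-osd=true --overwrite
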
2025-05-28 17:15:39.976960 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:15:39.976973 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:15:39.976981 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:15:39.976989 | orchestrator | 2025-05-28 17:15:39.976997 | orchestrator | TASK [Manage taints] *********************************************************** 2025-05-28 17:15:39.977005 | orchestrator | Wednesday 28 May 2025 17:15:39 +0000 (0:00:00.473) 0:04:16.021 ********* 2025-05-28 17:15:39.977013 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:15:39.977020 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:15:39.977028 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:15:39.977036 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:15:39.977044 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:15:39.977052 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:15:39.977059 | orchestrator | 2025-05-28 17:15:39.977067 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:15:39.977075 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:15:39.977085 | orchestrator | testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-05-28 17:15:39.977098 | orchestrator | testbed-node-1 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-05-28 17:15:39.977106 | orchestrator | testbed-node-2 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-05-28 17:15:39.977115 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-05-28 17:15:39.977122 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-05-28 17:15:39.977130 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-05-28 17:15:39.977138 | orchestrator | 2025-05-28 17:15:39.977146 | orchestrator | 2025-05-28 17:15:39.977154 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:15:39.977162 | orchestrator | Wednesday 28 May 2025 17:15:39 +0000 (0:00:00.527) 0:04:16.549 ********* 2025-05-28 17:15:39.977170 | orchestrator | =============================================================================== 2025-05-28 17:15:39.977178 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.10s 2025-05-28 17:15:39.977185 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 48.41s 2025-05-28 17:15:39.977193 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 13.55s 2025-05-28 17:15:39.977201 | orchestrator | Manage labels ---------------------------------------------------------- 11.70s 2025-05-28 17:15:39.977209 | orchestrator | kubectl : Install required packages ------------------------------------ 11.18s 2025-05-28 17:15:39.977217 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.02s 2025-05-28 17:15:39.977224 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.68s 2025-05-28 17:15:39.977232 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.46s 2025-05-28 17:15:39.977240 | orchestrator | 
k3s_download : Download k3s binary x64 ---------------------------------- 6.13s 2025-05-28 17:15:39.977247 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.57s 2025-05-28 17:15:39.977255 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 3.23s 2025-05-28 17:15:39.977263 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.11s 2025-05-28 17:15:39.977277 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.81s 2025-05-28 17:15:39.977285 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.42s 2025-05-28 17:15:39.977293 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.42s 2025-05-28 17:15:39.977300 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.26s 2025-05-28 17:15:39.977308 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.18s 2025-05-28 17:15:39.977316 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 1.71s 2025-05-28 17:15:39.977324 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.63s 2025-05-28 17:15:39.977331 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.57s 2025-05-28 17:15:39.977339 | orchestrator | 2025-05-28 17:15:39 | INFO  | Task 31ad9459-261b-4617-89f4-12da6da9de0a is in state STARTED 2025-05-28 17:15:39.977469 | orchestrator | 2025-05-28 17:15:39 | INFO  | Task 1ea0541c-b057-47d5-b02b-8a8ffc1acf6d is in state STARTED 2025-05-28 17:15:39.977484 | orchestrator | 2025-05-28 17:15:39 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:15:43.004873 | orchestrator | 2025-05-28 17:15:43 | INFO  | Task 6b838928-2e76-4cfa-82bc-f03285ede25b is in state STARTED 2025-05-28 17:15:43.005305 | orchestrator | 2025-05-28 17:15:43 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:15:43.008979 | orchestrator | 2025-05-28 17:15:43 | INFO  | Task 31ad9459-261b-4617-89f4-12da6da9de0a is in state STARTED 2025-05-28 17:15:43.009223 | orchestrator | 2025-05-28 17:15:43 | INFO  | Task 30a0b4a5-8663-4fa5-b793-73077898a278 is in state STARTED 2025-05-28 17:15:43.010119 | orchestrator | 2025-05-28 17:15:43 | INFO  | Task 1ea0541c-b057-47d5-b02b-8a8ffc1acf6d is in state STARTED 2025-05-28 17:15:43.010670 | orchestrator | 2025-05-28 17:15:43 | INFO  | Task 0c213c23-b33b-4587-b871-b74a3e62f47f is in state STARTED 2025-05-28 17:15:43.011010 | orchestrator | 2025-05-28 17:15:43 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:15:46.054463 | orchestrator | 2025-05-28 17:15:46 | INFO  | Task 6b838928-2e76-4cfa-82bc-f03285ede25b is in state STARTED 2025-05-28 17:15:46.054597 | orchestrator | 2025-05-28 17:15:46 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:15:46.054700 | orchestrator | 2025-05-28 17:15:46 | INFO  | Task 31ad9459-261b-4617-89f4-12da6da9de0a is in state STARTED 2025-05-28 17:15:46.055091 | orchestrator | 2025-05-28 17:15:46 | INFO  | Task 30a0b4a5-8663-4fa5-b793-73077898a278 is in state STARTED 2025-05-28 17:15:46.055749 | orchestrator | 2025-05-28 17:15:46 | INFO  | Task 1ea0541c-b057-47d5-b02b-8a8ffc1acf6d is in state STARTED 2025-05-28 17:15:46.059195 | orchestrator | 2025-05-28 
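The "Manage labels" task above (11.70s across six nodes) applies each node's role labels from the orchestrator. A minimal sketch of that pattern, assuming kubectl plus a working kubeconfig on the delegate host and a per-host node_labels list; the variable name and task layout are illustrative, not the actual OSISM role:

    # Illustrative only: apply each label from a per-host list, delegated
    # to the host that has kubectl access to the cluster.
    - name: Manage labels
      ansible.builtin.command: >-
        kubectl label node {{ inventory_hostname }} {{ item }} --overwrite
      delegate_to: localhost
      loop: "{{ node_labels }}"
      changed_when: false  # --overwrite re-applies idempotently, matching the ok: results above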
2025-05-28 17:15:39 - 17:15:49 | orchestrator | (task state polling every ~3 seconds: tasks 6b838928-2e76-4cfa-82bc-f03285ede25b, 498abbe0-8763-4901-8190-d0026b259450, 31ad9459-261b-4617-89f4-12da6da9de0a, 30a0b4a5-8663-4fa5-b793-73077898a278, 1ea0541c-b057-47d5-b02b-8a8ffc1acf6d and 0c213c23-b33b-4587-b871-b74a3e62f47f reported in state STARTED, each round followed by "Wait 1 second(s) until the next check")
2025-05-28 17:15:49.102584 | orchestrator | 2025-05-28 17:15:49 | INFO  | Task 30a0b4a5-8663-4fa5-b793-73077898a278 is in state SUCCESS
2025-05-28 17:15:52.151537 | orchestrator | 2025-05-28 17:15:52 | INFO  | Task 0c213c23-b33b-4587-b871-b74a3e62f47f is in state SUCCESS
2025-05-28 17:15:52 - 17:16:38 | orchestrator | (polling continues for the four remaining tasks, all reported in state STARTED throughout)
2025-05-28 17:16:41.051268 | orchestrator | 2025-05-28 17:16:41 | INFO  | Task 6b838928-2e76-4cfa-82bc-f03285ede25b is in state SUCCESS
2025-05-28 17:16:41.053900 | orchestrator |
2025-05-28 17:16:41.053967 | orchestrator |
2025-05-28 17:16:41.054100 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2025-05-28 17:16:41.054128 | orchestrator |
2025-05-28 17:16:41.054147 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-05-28 17:16:41.054167 | orchestrator | Wednesday 28 May 2025 17:15:44 +0000 (0:00:00.195) 0:00:00.195 *********
2025-05-28 17:16:41.054187 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-05-28 17:16:41.054206 | orchestrator |
2025-05-28 17:16:41.054225 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-05-28 17:16:41.054243 | orchestrator | Wednesday 28 May 2025 17:15:44 +0000 (0:00:00.822) 0:00:01.017 *********
2025-05-28 17:16:41.054264 | orchestrator | changed: [testbed-manager]
2025-05-28 17:16:41.054283 | orchestrator |
2025-05-28 17:16:41.054302 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2025-05-28 17:16:41.054322 | orchestrator | Wednesday 28 May 2025 17:15:46 +0000 (0:00:01.134) 0:00:02.151 *********
2025-05-28 17:16:41.054342 | orchestrator | changed: [testbed-manager]
2025-05-28 17:16:41.054360 | orchestrator |
2025-05-28 17:16:41.054380 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 17:16:41.054399 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 17:16:41.054421 | orchestrator |
2025-05-28 17:16:41.054440 | orchestrator |
2025-05-28 17:16:41.054460 | orchestrator | TASKS RECAP ********************************************************************
2025-05-28 17:16:41.054481 | orchestrator | Wednesday 28 May 2025 17:15:46 +0000 (0:00:00.396) 0:00:02.548 *********
2025-05-28 17:16:41.054494 | orchestrator | ===============================================================================
2025-05-28 17:16:41.054506 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.13s
2025-05-28 17:16:41.054519 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.82s
2025-05-28 17:16:41.054531 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.40s
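This play and the "Prepare kubeconfig file" play that follows share the same pattern: fetch the admin kubeconfig from testbed-node-0 and rewrite its API server address, since k3s points the generated file at https://127.0.0.1:6443 by default. A sketch of the rewrite step under that assumption; the destination path is hypothetical:

    # Replace the loopback API endpoint written by k3s with the address
    # of the first control plane node (192.168.16.10 in the log above).
    - name: Change server address in the kubeconfig file
      ansible.builtin.replace:
        path: /opt/configuration/kubeconfig   # assumed destination of "Write kubeconfig file"
        regexp: 'https://127\.0\.0\.1:6443'
        replace: 'https://192.168.16.10:6443'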
2025-05-28 17:16:41.054544 | orchestrator |
2025-05-28 17:16:41.054698 | orchestrator |
2025-05-28 17:16:41.054732 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-05-28 17:16:41.054745 | orchestrator |
2025-05-28 17:16:41.054757 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-05-28 17:16:41.054770 | orchestrator | Wednesday 28 May 2025 17:15:43 +0000 (0:00:00.174) 0:00:00.174 *********
2025-05-28 17:16:41.054783 | orchestrator | ok: [testbed-manager]
2025-05-28 17:16:41.054797 | orchestrator |
2025-05-28 17:16:41.054809 | orchestrator | TASK [Create .kube directory] **************************************************
2025-05-28 17:16:41.054819 | orchestrator | Wednesday 28 May 2025 17:15:44 +0000 (0:00:00.485) 0:00:00.660 *********
2025-05-28 17:16:41.054830 | orchestrator | ok: [testbed-manager]
2025-05-28 17:16:41.054841 | orchestrator |
2025-05-28 17:16:41.054885 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-05-28 17:16:41.054897 | orchestrator | Wednesday 28 May 2025 17:15:44 +0000 (0:00:00.524) 0:00:01.184 *********
2025-05-28 17:16:41.054908 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-05-28 17:16:41.054919 | orchestrator |
2025-05-28 17:16:41.054930 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-05-28 17:16:41.054941 | orchestrator | Wednesday 28 May 2025 17:15:45 +0000 (0:00:00.696) 0:00:01.881 *********
2025-05-28 17:16:41.054952 | orchestrator | changed: [testbed-manager]
2025-05-28 17:16:41.054963 | orchestrator |
2025-05-28 17:16:41.054974 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-05-28 17:16:41.054985 | orchestrator | Wednesday 28 May 2025 17:15:46 +0000 (0:00:01.091) 0:00:02.973 *********
2025-05-28 17:16:41.054995 | orchestrator | changed: [testbed-manager]
2025-05-28 17:16:41.055006 | orchestrator |
2025-05-28 17:16:41.055017 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-05-28 17:16:41.055028 | orchestrator | Wednesday 28 May 2025 17:15:47 +0000 (0:00:00.530) 0:00:03.503 *********
2025-05-28 17:16:41.055053 | orchestrator | changed: [testbed-manager -> localhost]
2025-05-28 17:16:41.055065 | orchestrator |
2025-05-28 17:16:41.055076 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-05-28 17:16:41.055087 | orchestrator | Wednesday 28 May 2025 17:15:48 +0000 (0:00:01.390) 0:00:04.894 *********
2025-05-28 17:16:41.055097 | orchestrator | changed: [testbed-manager -> localhost]
2025-05-28 17:16:41.055108 | orchestrator |
2025-05-28 17:16:41.055119 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-05-28 17:16:41.055130 | orchestrator | Wednesday 28 May 2025 17:15:49 +0000 (0:00:00.710) 0:00:05.604 *********
2025-05-28 17:16:41.055141 | orchestrator | ok: [testbed-manager]
2025-05-28 17:16:41.055152 | orchestrator |
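The play finishes by wiring kubectl into the operator's interactive shell, as the next task's output shows. Roughly, and only as a sketch (the profile path and the dragon operator user are assumptions based on testbed defaults):

    # Make the copied kubeconfig the default for interactive sessions
    # and enable bash completion for kubectl.
    - name: Set KUBECONFIG environment variable
      ansible.builtin.lineinfile:
        path: /home/dragon/.bashrc            # assumed operator user and profile file
        line: export KUBECONFIG=/home/dragon/.kube/config
        create: true

    - name: Enable kubectl command line completion
      ansible.builtin.lineinfile:
        path: /home/dragon/.bashrc
        line: source <(kubectl completion bash)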
2025-05-28 17:16:41.055163 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-05-28 17:16:41.055174 | orchestrator | Wednesday 28 May 2025 17:15:49 +0000 (0:00:00.427) 0:00:06.031 *********
2025-05-28 17:16:41.055184 | orchestrator | ok: [testbed-manager]
2025-05-28 17:16:41.055195 | orchestrator |
2025-05-28 17:16:41.055206 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 17:16:41.055218 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 17:16:41.055229 | orchestrator |
2025-05-28 17:16:41.055244 | orchestrator |
2025-05-28 17:16:41.055262 | orchestrator | TASKS RECAP ********************************************************************
2025-05-28 17:16:41.055280 | orchestrator | Wednesday 28 May 2025 17:15:50 +0000 (0:00:00.298) 0:00:06.330 *********
2025-05-28 17:16:41.055298 | orchestrator | ===============================================================================
2025-05-28 17:16:41.055317 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.39s
2025-05-28 17:16:41.055335 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.09s
2025-05-28 17:16:41.055353 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.71s
2025-05-28 17:16:41.055394 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.70s
2025-05-28 17:16:41.055413 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.53s
2025-05-28 17:16:41.055431 | orchestrator | Create .kube directory -------------------------------------------------- 0.52s
2025-05-28 17:16:41.055449 | orchestrator | Get home directory of operator user ------------------------------------- 0.49s
2025-05-28 17:16:41.055468 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.43s
2025-05-28 17:16:41.055486 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.30s
2025-05-28 17:16:41.055505 | orchestrator |
2025-05-28 17:16:41.055523 | orchestrator |
2025-05-28 17:16:41.055543 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-05-28 17:16:41.055593 | orchestrator |
2025-05-28 17:16:41.055612 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-05-28 17:16:41.055629 | orchestrator | Wednesday 28 May 2025 17:14:23 +0000 (0:00:00.099) 0:00:00.099 *********
2025-05-28 17:16:41.055647 | orchestrator | ok: [localhost] => {
2025-05-28 17:16:41.055667 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-05-28 17:16:41.055686 | orchestrator | }
2025-05-28 17:16:41.055706 | orchestrator |
2025-05-28 17:16:41.055722 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-05-28 17:16:41.055733 | orchestrator | Wednesday 28 May 2025 17:14:23 +0000 (0:00:00.036) 0:00:00.135 *********
2025-05-28 17:16:41.055795 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-05-28 17:16:41.055809 | orchestrator | ...ignoring
2025-05-28 17:16:41.055820 | orchestrator |
2025-05-28 17:16:41.055831 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-05-28 17:16:41.055855 | orchestrator | Wednesday 28 May 2025 17:14:26 +0000 (0:00:02.972) 0:00:03.108 *********
2025-05-28 17:16:41.055866 | orchestrator | skipping: [localhost]
2025-05-28 17:16:41.055877 | orchestrator |
2025-05-28 17:16:41.055896 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-05-28 17:16:41.055907 | orchestrator | Wednesday 28 May 2025 17:14:26 +0000 (0:00:00.057) 0:00:03.165 *********
2025-05-28 17:16:41.055918 | orchestrator | ok: [localhost]
2025-05-28 17:16:41.055929 | orchestrator |
2025-05-28 17:16:41.055940 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-28 17:16:41.055951 | orchestrator |
2025-05-28 17:16:41.055962 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-28 17:16:41.055973 | orchestrator | Wednesday 28 May 2025 17:14:27 +0000 (0:00:00.147) 0:00:03.313 *********
2025-05-28 17:16:41.055983 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:16:41.055994 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:16:41.056006 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:16:41.056026 | orchestrator |
2025-05-28 17:16:41.056045 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-28 17:16:41.056064 | orchestrator | Wednesday 28 May 2025 17:14:27 +0000 (0:00:00.493) 0:00:03.807 *********
2025-05-28 17:16:41.056083 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-05-28 17:16:41.056101 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-05-28 17:16:41.056120 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-05-28 17:16:41.056140 | orchestrator |
2025-05-28 17:16:41.056160 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-05-28 17:16:41.056180 | orchestrator |
2025-05-28 17:16:41.056192 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-05-28 17:16:41.056203 | orchestrator | Wednesday 28 May 2025 17:14:28 +0000 (0:00:01.250) 0:00:05.057 *********
2025-05-28 17:16:41.056214 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 17:16:41.056225 | orchestrator |
2025-05-28 17:16:41.056235 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-05-28 17:16:41.056246 | orchestrator | Wednesday 28 May 2025 17:14:29 +0000 (0:00:00.846) 0:00:05.904 *********
2025-05-28 17:16:41.056257 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:16:41.056267 | orchestrator |
2025-05-28 17:16:41.056278 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-05-28 17:16:41.056289 | orchestrator | Wednesday 28 May 2025 17:14:31 +0000 (0:00:01.728) 0:00:07.632 *********
2025-05-28 17:16:41.056299 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:16:41.056310 | orchestrator |
2025-05-28 17:16:41.056321 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-05-28 17:16:41.056331 | orchestrator | Wednesday 28 May 2025 17:14:31 +0000 (0:00:00.323) 0:00:07.956 *********
2025-05-28 17:16:41.056342 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:16:41.056352 | orchestrator |
2025-05-28 17:16:41.056363 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-05-28 17:16:41.056374 | orchestrator | Wednesday 28 May 2025 17:14:32 +0000 (0:00:00.361) 0:00:08.317 *********
2025-05-28 17:16:41.056384 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:16:41.056395 | orchestrator |
2025-05-28 17:16:41.056405 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-05-28 17:16:41.056416 | orchestrator | Wednesday 28 May 2025 17:14:32 +0000 (0:00:00.361) 0:00:08.679 *********
2025-05-28 17:16:41.056427 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:16:41.056437 | orchestrator |
2025-05-28 17:16:41.056448 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-05-28 17:16:41.056458 | orchestrator | Wednesday 28 May 2025 17:14:32 +0000 (0:00:00.524) 0:00:09.204 *********
2025-05-28 17:16:41.056470 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 17:16:41.056488 | orchestrator |
2025-05-28 17:16:41.056502 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-05-28 17:16:41.056538 | orchestrator | Wednesday 28 May 2025 17:14:33 +0000 (0:00:00.637) 0:00:09.841 *********
2025-05-28 17:16:41.056588 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:16:41.056607 | orchestrator |
2025-05-28 17:16:41.056626 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2025-05-28 17:16:41.056644 | orchestrator | Wednesday 28 May 2025 17:14:34 +0000 (0:00:00.822) 0:00:10.664 *********
2025-05-28 17:16:41.056661 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:16:41.056679 | orchestrator |
2025-05-28 17:16:41.056697 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2025-05-28 17:16:41.056715 | orchestrator | Wednesday 28 May 2025 17:14:34 +0000 (0:00:00.411) 0:00:11.075 *********
2025-05-28 17:16:41.056734 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:16:41.056752 | orchestrator |
2025-05-28 17:16:41.056771 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2025-05-28 17:16:41.056788 | orchestrator | Wednesday 28 May 2025 17:14:35 +0000 (0:00:00.414) 0:00:11.490 *********
2025-05-28 17:16:41.056824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-28 17:16:41.056853 | orchestrator | changed: [testbed-node-0] => (item=rabbitmq, same service definition as above)
2025-05-28 17:16:41.056876 | orchestrator | changed: [testbed-node-1] => (item=rabbitmq, same service definition as above)
2025-05-28 17:16:41.056904 | orchestrator |
2025-05-28 17:16:41.056916 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2025-05-28 17:16:41.056927 | orchestrator | Wednesday 28 May 2025 17:14:36 +0000 (0:00:01.207) 0:00:12.698 *********
2025-05-28 17:16:41.056951 | orchestrator | changed: [testbed-node-2] => (item=rabbitmq, same service definition as above)
2025-05-28 17:16:41.056969 | orchestrator | changed: [testbed-node-0] => (item=rabbitmq, same service definition as above)
2025-05-28 17:16:41.056982 | orchestrator | changed: [testbed-node-1] => (item=rabbitmq, same service definition as above)
2025-05-28 17:16:41.056994 | orchestrator |
2025-05-28 17:16:41.057004 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2025-05-28 17:16:41.057015 | orchestrator | Wednesday 28 May 2025 17:14:38 +0000 (0:00:01.632) 0:00:14.330 *********
2025-05-28 17:16:41.057034 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-05-28 17:16:41.057046 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-05-28 17:16:41.057056 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-05-28 17:16:41.057067 | orchestrator |
2025-05-28 17:16:41.057078 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2025-05-28 17:16:41.057089 | orchestrator | Wednesday 28 May 2025 17:14:40 +0000 (0:00:02.306) 0:00:16.636 *********
2025-05-28 17:16:41.057100 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-05-28 17:16:41.057111 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-05-28 17:16:41.057173 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-05-28 17:16:41.057186 | orchestrator |
2025-05-28 17:16:41.057406 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2025-05-28 17:16:41.057432 | orchestrator | Wednesday 28 May 2025 17:14:44 +0000 (0:00:03.962) 0:00:20.599 *********
2025-05-28 17:16:41.057443 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-05-28 17:16:41.057455 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-05-28 17:16:41.057466 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-05-28 17:16:41.057477 | orchestrator |
2025-05-28 17:16:41.057492 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2025-05-28 17:16:41.057509 | orchestrator | Wednesday 28 May 2025 17:14:45 +0000 (0:00:01.397) 0:00:21.997 *********
2025-05-28 17:16:41.057536 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-05-28 17:16:41.057638 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-05-28 17:16:41.057658 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-05-28 17:16:41.057675 | orchestrator |
2025-05-28 17:16:41.057691 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2025-05-28 17:16:41.057709 | orchestrator | Wednesday 28 May 2025 17:14:47 +0000 (0:00:01.900) 0:00:23.897 *********
2025-05-28 17:16:41.057727 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-05-28 17:16:41.057743 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-05-28 17:16:41.057761 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-05-28 17:16:41.057780 | orchestrator |
2025-05-28 17:16:41.057797 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2025-05-28 17:16:41.057815 | orchestrator | Wednesday 28 May 2025 17:14:49 +0000 (0:00:01.621) 0:00:25.519 *********
2025-05-28 17:16:41.057834 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-05-28 17:16:41.057863 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-05-28 17:16:41.057881 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-05-28 17:16:41.057899 | orchestrator |
2025-05-28 17:16:41.057917 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-05-28 17:16:41.057935 | orchestrator | Wednesday 28 May 2025 17:14:51 +0000 (0:00:01.076) 0:00:27.417 *********
2025-05-28 17:16:41.057950 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:16:41.057967 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:16:41.057983 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:16:41.058001 | orchestrator |
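All of the "Copying over ..." tasks above stage files on the host under /etc/kolla/rabbitmq/; the container only sees them because that directory is mounted at /var/lib/kolla/config_files/ (see the volumes list in the service definition) and KOLLA_CONFIG_STRATEGY=COPY_ALWAYS makes the kolla start script copy them in on every container start. A sketch of one such task under that reading; the template name is assumed:

    # Render the kolla config descriptor; a change here is what ends up
    # triggering the "Restart rabbitmq container" handler further down.
    - name: Copying over config.json files for services
      ansible.builtin.template:
        src: rabbitmq.json.j2                  # assumed template name
        dest: /etc/kolla/rabbitmq/config.json
        mode: "0660"
      notify:
        - Restart rabbitmq container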
2025-05-28 17:16:41.058052 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2025-05-28 17:16:41.058079 | orchestrator | Wednesday 28 May 2025 17:14:52 +0000 (0:00:01.076) 0:00:28.494 *********
2025-05-28 17:16:41.058093 | orchestrator | changed: [testbed-node-1] => (item=rabbitmq, same service definition as above)
2025-05-28 17:16:41.058119 | orchestrator | changed: [testbed-node-0] => (item=rabbitmq, same service definition as above)
2025-05-28 17:16:41.058131 | orchestrator | changed: [testbed-node-2] => (item=rabbitmq, same service definition as above)
2025-05-28 17:16:41.058141 | orchestrator |
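The healthcheck block in the service definition dumped above ('CMD-SHELL healthcheck_rabbitmq', 30 s interval and timeout, 3 retries, 5 s start period) is what Docker evaluates to mark the container healthy. Translated into the generic community.docker module purely for illustration (kolla-ansible actually drives the container through its own wrapper module):

    # Rough equivalent of the logged container definition; values taken
    # from the item printed above, module choice is not what kolla uses.
    - name: Check rabbitmq containers
      community.docker.docker_container:
        name: rabbitmq
        image: registry.osism.tech/kolla/rabbitmq:2024.2
        healthcheck:
          test: ["CMD-SHELL", "healthcheck_rabbitmq"]
          interval: 30s
          timeout: 30s
          retries: 3
          start_period: 5s
        volumes:
          - rabbitmq:/var/lib/rabbitmq/
          - kolla_logs:/var/log/kolla/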
2025-05-28 17:16:41.058151 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2025-05-28 17:16:41.058161 | orchestrator | Wednesday 28 May 2025 17:14:53 +0000 (0:00:01.600) 0:00:30.095 *********
2025-05-28 17:16:41.058171 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:16:41.058181 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:16:41.058190 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:16:41.058207 | orchestrator |
2025-05-28 17:16:41.058262 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2025-05-28 17:16:41.058274 | orchestrator | Wednesday 28 May 2025 17:14:54 +0000 (0:00:01.104) 0:00:31.199 *********
2025-05-28 17:16:41.058290 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:16:41.058300 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:16:41.058309 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:16:41.058319 | orchestrator |
2025-05-28 17:16:41.058328 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2025-05-28 17:16:41.058339 | orchestrator | Wednesday 28 May 2025 17:15:02 +0000 (0:00:07.583) 0:00:38.782 *********
2025-05-28 17:16:41.058349 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:16:41.058367 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:16:41.058384 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:16:41.058401 | orchestrator |
2025-05-28 17:16:41.058417 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-05-28 17:16:41.058435 | orchestrator |
2025-05-28 17:16:41.058452 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-05-28 17:16:41.058470 | orchestrator | Wednesday 28 May 2025 17:15:02 +0000 (0:00:00.303) 0:00:39.086 *********
2025-05-28 17:16:41.058481 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:16:41.058491 | orchestrator |
2025-05-28 17:16:41.058501 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-05-28 17:16:41.058510 | orchestrator | Wednesday 28 May 2025 17:15:03 +0000 (0:00:00.611) 0:00:39.697 *********
2025-05-28 17:16:41.058520 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:16:41.058529 | orchestrator |
2025-05-28 17:16:41.058539 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-05-28 17:16:41.058578 | orchestrator | Wednesday 28 May 2025 17:15:03 +0000 (0:00:00.228) 0:00:39.926 *********
2025-05-28 17:16:41.058588 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:16:41.058598 | orchestrator |
2025-05-28 17:16:41.058607 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-05-28 17:16:41.058617 | orchestrator | Wednesday 28 May 2025 17:15:10 +0000 (0:00:06.671) 0:00:46.597 *********
2025-05-28 17:16:41.058626 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:16:41.058636 | orchestrator |
2025-05-28 17:16:41.058645 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-05-28 17:16:41.058655 | orchestrator |
2025-05-28 17:16:41.058664 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-05-28 17:16:41.058674 | orchestrator | Wednesday 28 May 2025 17:15:59 +0000 (0:00:49.655) 0:01:36.253 *********
2025-05-28 17:16:41.058683 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:16:41.058693 | orchestrator |
2025-05-28 17:16:41.058703 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-05-28 17:16:41.058721 | orchestrator | Wednesday 28 May 2025 17:16:00 +0000 (0:00:00.625) 0:01:36.878 *********
2025-05-28 17:16:41.058737 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:16:41.058753 | orchestrator |
2025-05-28 17:16:41.058769 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-05-28 17:16:41.058785 | orchestrator | Wednesday 28 May 2025 17:16:01 +0000 (0:00:00.807) 0:01:37.686 *********
2025-05-28 17:16:41.058801 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:16:41.058817 | orchestrator |
2025-05-28 17:16:41.058833 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-05-28 17:16:41.058850 | orchestrator | Wednesday 28 May 2025 17:16:03 +0000 (0:00:02.215) 0:01:39.901 *********
2025-05-28 17:16:41.058866 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:16:41.058882 | orchestrator |
2025-05-28 17:16:41.058898 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-05-28 17:16:41.058915 | orchestrator |
2025-05-28 17:16:41.058931 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-05-28 17:16:41.058948 | orchestrator | Wednesday 28 May 2025 17:16:19 +0000 (0:00:15.377) 0:01:55.278 *********
2025-05-28 17:16:41.058958 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:16:41.058968 | orchestrator |
2025-05-28 17:16:41.058987 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-05-28 17:16:41.058997 | orchestrator | Wednesday 28 May 2025 17:16:19 +0000 (0:00:00.625) 0:01:55.904 *********
2025-05-28 17:16:41.059024 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:16:41.059033 | orchestrator |
2025-05-28 17:16:41.059043 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-05-28 17:16:41.059052 | orchestrator | Wednesday 28 May 2025 17:16:19 +0000 (0:00:00.271) 0:01:56.175 *********
2025-05-28 17:16:41.059062 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:16:41.059075 | orchestrator |
2025-05-28 17:16:41.059091 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-05-28 17:16:41.059107 | orchestrator | Wednesday 28 May 2025 17:16:21 +0000 (0:00:01.644) 0:01:57.819 *********
2025-05-28 17:16:41.059123 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:16:41.059139 | orchestrator |
2025-05-28 17:16:41.059155 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2025-05-28 17:16:41.059171 | orchestrator |
2025-05-28 17:16:41.059187 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2025-05-28 17:16:41.059203 | orchestrator | Wednesday 28 May 2025 17:16:36 +0000 (0:00:14.788) 0:02:12.608 *********
2025-05-28 17:16:41.059220 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 17:16:41.059236 | orchestrator |
2025-05-28 17:16:41.059252 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2025-05-28 17:16:41.059268 | orchestrator | Wednesday 28 May 2025 17:16:37 +0000 (0:00:01.077) 0:02:13.686 *********
2025-05-28 17:16:41.059285 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-05-28 17:16:41.059302 | orchestrator | enable_outward_rabbitmq_True
2025-05-28 17:16:41.059318 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-05-28 17:16:41.059336 | orchestrator | outward_rabbitmq_restart
2025-05-28 17:16:41.059346 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:16:41.059356 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:16:41.059365 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:16:41.059374 | orchestrator |
2025-05-28 17:16:41.059384 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2025-05-28 17:16:41.059393 | orchestrator | skipping: no hosts matched
2025-05-28 17:16:41.059538 | orchestrator |
2025-05-28 17:16:41.059573 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-05-28 17:16:41.059583 | orchestrator | skipping: no hosts matched
2025-05-28 17:16:41.059593 | orchestrator |
2025-05-28 17:16:41.059699 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2025-05-28 17:16:41.059725 | orchestrator | skipping: no hosts matched
2025-05-28 17:16:41.059742 | orchestrator |
2025-05-28 17:16:41.059760 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 17:16:41.059778 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-05-28 17:16:41.059881 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-05-28 17:16:41.059900 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-28 17:16:41.059917 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-28 17:16:41.059933 | orchestrator |
2025-05-28 17:16:41.059949 | orchestrator |
2025-05-28 17:16:41.059965 | orchestrator | TASKS RECAP ********************************************************************
2025-05-28 17:16:41.060089 | orchestrator | Wednesday 28 May 2025 17:16:39 +0000 (0:00:02.486) 0:02:16.173 *********
2025-05-28 17:16:41.060124 | orchestrator | ===============================================================================
2025-05-28 17:16:41.060142 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 79.82s
2025-05-28 17:16:41.060158 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.53s
2025-05-28 17:16:41.060189 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.58s
2025-05-28 17:16:41.060207 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.96s
2025-05-28 17:16:41.060224 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.97s
2025-05-28 17:16:41.060240 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.49s
2025-05-28 17:16:41.060256 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.31s
2025-05-28 17:16:41.060272 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.90s
2025-05-28 17:16:41.060290 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.90s
2025-05-28 17:16:41.060306 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.86s
2025-05-28 17:16:41.060320 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.73s
2025-05-28 17:16:41.060332 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.63s
2025-05-28 17:16:41.060343 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.62s
2025-05-28 17:16:41.060354 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.60s
2025-05-28 17:16:41.060365 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.40s
2025-05-28 17:16:41.060377 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.31s
2025-05-28 17:16:41.060388 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.25s
2025-05-28 17:16:41.060412 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.21s
2025-05-28 17:16:41.060424 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.10s
2025-05-28 17:16:41.060435 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 1.08s
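The three "Restart rabbitmq services" plays above form a rolling restart: one broker at a time, blocking until it is back before touching the next (node-0 needed ~50 s while the cluster formed, node-1 and node-2 about 15 s each). The serialization pattern, sketched with a plain port probe in place of whatever readiness check the role really performs:

    # One cluster member at a time: restart, then wait until AMQP
    # answers again before moving to the next host.
    - name: Restart rabbitmq services
      hosts: rabbitmq
      serial: 1
      tasks:
        - name: Restart rabbitmq container
          ansible.builtin.command: docker restart rabbitmq   # stand-in for the kolla handler
          changed_when: true

        - name: Waiting for rabbitmq to start
          ansible.builtin.wait_for:
            host: "{{ ansible_host }}"
            port: 5672          # AMQP listener; illustrative readiness check only
            timeout: 120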
2025-05-28 17:17:51.266555 | orchestrator | 2025-05-28 17:17:51 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED
2025-05-28 17:17:51.267274 | orchestrator | 2025-05-28 17:17:51 | INFO  | Task 31ad9459-261b-4617-89f4-12da6da9de0a is in state STARTED
2025-05-28 17:17:51.271002 | orchestrator |
2025-05-28 17:17:51.271048 | orchestrator |
2025-05-28 17:17:51.271063 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-28 17:17:51.271076 | orchestrator |
2025-05-28 17:17:51.271088 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-28 17:17:51.271101 | orchestrator | Wednesday 28 May 2025 17:15:18 +0000 (0:00:00.212) 0:00:00.212 *********
2025-05-28 17:17:51.271114 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:17:51.271127 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:17:51.271139 | orchestrator | ok:
[testbed-node-2] 2025-05-28 17:17:51.271150 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:17:51.271162 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:17:51.271173 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:17:51.271185 | orchestrator | 2025-05-28 17:17:51.271197 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-28 17:17:51.271273 | orchestrator | Wednesday 28 May 2025 17:15:20 +0000 (0:00:01.399) 0:00:01.612 ********* 2025-05-28 17:17:51.271286 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-05-28 17:17:51.271297 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-05-28 17:17:51.271308 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-05-28 17:17:51.271319 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-05-28 17:17:51.271329 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-05-28 17:17:51.271340 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-05-28 17:17:51.271437 | orchestrator | 2025-05-28 17:17:51.271449 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-05-28 17:17:51.271460 | orchestrator | 2025-05-28 17:17:51.271471 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-05-28 17:17:51.271520 | orchestrator | Wednesday 28 May 2025 17:15:21 +0000 (0:00:01.474) 0:00:03.086 ********* 2025-05-28 17:17:51.271535 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:17:51.271547 | orchestrator | 2025-05-28 17:17:51.272553 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-05-28 17:17:51.272568 | orchestrator | Wednesday 28 May 2025 17:15:22 +0000 (0:00:01.229) 0:00:04.316 ********* 2025-05-28 17:17:51.272582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.272598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.272610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.272629 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.272640 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.272651 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.272676 | orchestrator | 2025-05-28 17:17:51.272701 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-05-28 17:17:51.272712 | orchestrator | Wednesday 28 May 2025 17:15:24 +0000 (0:00:01.580) 0:00:05.897 ********* 2025-05-28 17:17:51.272723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.272735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.272745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.272756 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.272767 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.272778 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.272789 | orchestrator | 2025-05-28 17:17:51.272800 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-05-28 17:17:51.272815 | orchestrator | Wednesday 28 May 2025 17:15:26 +0000 (0:00:02.122) 0:00:08.019 ********* 2025-05-28 17:17:51.272826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.272837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.272864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.272877 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.272888 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.272899 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.272910 | orchestrator | 2025-05-28 17:17:51.272921 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-05-28 17:17:51.272932 | orchestrator | Wednesday 28 May 2025 17:15:28 +0000 (0:00:01.906) 0:00:09.926 ********* 2025-05-28 17:17:51.272943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.272953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.272964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.272980 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.272997 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.273008 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.273019 | orchestrator | 2025-05-28 17:17:51.273035 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-05-28 17:17:51.273046 | orchestrator | Wednesday 28 May 2025 17:15:31 +0000 
(0:00:02.618) 0:00:12.545 ********* 2025-05-28 17:17:51.273057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.273068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.273080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.273091 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.273102 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.273113 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.273123 | orchestrator | 2025-05-28 17:17:51.273134 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-05-28 17:17:51.273145 | orchestrator | Wednesday 28 May 2025 17:15:32 +0000 (0:00:01.706) 0:00:14.252 ********* 2025-05-28 17:17:51.273165 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:17:51.273177 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:17:51.273187 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:17:51.273198 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:17:51.273209 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:17:51.273219 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:17:51.273230 | orchestrator | 
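The br-int integration bridge that ovn-controller attaches to is created inside the Open vSwitch container on every node. Roughly what the "Create br-int bridge on OpenvSwitch" task above amounts to — the container name and exact flags are assumptions based on common kolla-ansible conventions, not taken from this log:

    # --may-exist keeps the call idempotent: the first deploy reports
    # "changed" (as above); a re-run would report "ok".
    docker exec openvswitch_vswitchd \
        ovs-vsctl --may-exist add-br br-int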
2025-05-28 17:17:51.273241 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-05-28 17:17:51.273252 | orchestrator | Wednesday 28 May 2025 17:15:35 +0000 (0:00:02.520) 0:00:16.772 ********* 2025-05-28 17:17:51.273263 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-05-28 17:17:51.273274 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-05-28 17:17:51.273285 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-05-28 17:17:51.273295 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-05-28 17:17:51.273306 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-05-28 17:17:51.273316 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-05-28 17:17:51.273327 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-28 17:17:51.273338 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-28 17:17:51.273354 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-28 17:17:51.273365 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-28 17:17:51.273375 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-28 17:17:51.273386 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-28 17:17:51.273397 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-28 17:17:51.273409 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-28 17:17:51.273420 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-28 17:17:51.273431 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-28 17:17:51.273442 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-28 17:17:51.273452 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-28 17:17:51.273463 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-28 17:17:51.273475 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-28 17:17:51.273559 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-28 17:17:51.273571 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-28 17:17:51.273581 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 
'value': '60000'}) 2025-05-28 17:17:51.273592 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-28 17:17:51.273603 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-28 17:17:51.273620 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-28 17:17:51.273631 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-28 17:17:51.273642 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-28 17:17:51.273652 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-28 17:17:51.273663 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-28 17:17:51.273672 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-28 17:17:51.273682 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-28 17:17:51.273692 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-28 17:17:51.273701 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-28 17:17:51.273715 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-28 17:17:51.273725 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-28 17:17:51.273734 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-28 17:17:51.273743 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-28 17:17:51.273753 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-28 17:17:51.273762 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-28 17:17:51.273772 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-28 17:17:51.273781 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-28 17:17:51.273791 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-05-28 17:17:51.273801 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-05-28 17:17:51.273817 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-05-28 17:17:51.273827 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-05-28 17:17:51.273836 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-05-28 17:17:51.273846 | orchestrator | 
changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-05-28 17:17:51.273855 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-28 17:17:51.273865 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-28 17:17:51.273875 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-28 17:17:51.273884 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-28 17:17:51.273893 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-28 17:17:51.273908 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-28 17:17:51.273918 | orchestrator | 2025-05-28 17:17:51.273928 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-28 17:17:51.273937 | orchestrator | Wednesday 28 May 2025 17:15:53 +0000 (0:00:18.190) 0:00:34.963 ********* 2025-05-28 17:17:51.273947 | orchestrator | 2025-05-28 17:17:51.273957 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-28 17:17:51.273966 | orchestrator | Wednesday 28 May 2025 17:15:53 +0000 (0:00:00.066) 0:00:35.030 ********* 2025-05-28 17:17:51.273975 | orchestrator | 2025-05-28 17:17:51.273985 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-28 17:17:51.273995 | orchestrator | Wednesday 28 May 2025 17:15:53 +0000 (0:00:00.059) 0:00:35.090 ********* 2025-05-28 17:17:51.274004 | orchestrator | 2025-05-28 17:17:51.274014 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-28 17:17:51.274089 | orchestrator | Wednesday 28 May 2025 17:15:53 +0000 (0:00:00.059) 0:00:35.149 ********* 2025-05-28 17:17:51.274099 | orchestrator | 2025-05-28 17:17:51.274108 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-28 17:17:51.274118 | orchestrator | Wednesday 28 May 2025 17:15:53 +0000 (0:00:00.065) 0:00:35.214 ********* 2025-05-28 17:17:51.274127 | orchestrator | 2025-05-28 17:17:51.274137 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-28 17:17:51.274146 | orchestrator | Wednesday 28 May 2025 17:15:53 +0000 (0:00:00.057) 0:00:35.272 ********* 2025-05-28 17:17:51.274155 | orchestrator | 2025-05-28 17:17:51.274165 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-05-28 17:17:51.274174 | orchestrator | Wednesday 28 May 2025 17:15:54 +0000 (0:00:00.072) 0:00:35.344 ********* 2025-05-28 17:17:51.274184 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:17:51.274193 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:17:51.274203 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:17:51.274212 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:17:51.274222 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:17:51.274231 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:17:51.274241 | orchestrator | 2025-05-28 17:17:51.274250 
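The settings written by the "Configure OVN in OVSDB" task above land in the external_ids column of the local Open_vSwitch table, which is where ovn-controller reads its chassis configuration at startup. Collapsed into a single hand-run command for testbed-node-0, with all values copied from the log (the container name is an assumption; kolla applies these item by item):

    # Per-chassis OVN settings for testbed-node-0, as logged above.
    docker exec openvswitch_vswitchd ovs-vsctl set Open_vSwitch . \
        external_ids:ovn-encap-ip=192.168.16.10 \
        external_ids:ovn-encap-type=geneve \
        external_ids:ovn-remote="tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642" \
        external_ids:ovn-remote-probe-interval=60000 \
        external_ids:ovn-openflow-probe-interval=60 \
        external_ids:ovn-monitor-all=false \
        external_ids:ovn-bridge-mappings=physnet1:br-ex \
        external_ids:ovn-cms-options="enable-chassis-as-gw,availability-zones=nova"

Note the asymmetry in the task output above: only testbed-node-0/1/2 receive ovn-bridge-mappings and ovn-cms-options (they act as gateway chassis), while on testbed-node-3/4/5 those keys are removed and ovn-chassis-mac-mappings is set instead.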
| orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-05-28 17:17:51.274260 | orchestrator | Wednesday 28 May 2025 17:15:55 +0000 (0:00:01.653) 0:00:36.997 ********* 2025-05-28 17:17:51.274269 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:17:51.274279 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:17:51.274288 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:17:51.274298 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:17:51.274311 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:17:51.274321 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:17:51.274330 | orchestrator | 2025-05-28 17:17:51.274340 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-05-28 17:17:51.274349 | orchestrator | 2025-05-28 17:17:51.274358 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-28 17:17:51.274368 | orchestrator | Wednesday 28 May 2025 17:16:33 +0000 (0:00:37.679) 0:01:14.676 ********* 2025-05-28 17:17:51.274378 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:17:51.274387 | orchestrator | 2025-05-28 17:17:51.274397 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-28 17:17:51.274406 | orchestrator | Wednesday 28 May 2025 17:16:33 +0000 (0:00:00.502) 0:01:15.179 ********* 2025-05-28 17:17:51.274416 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:17:51.274425 | orchestrator | 2025-05-28 17:17:51.274434 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-05-28 17:17:51.274444 | orchestrator | Wednesday 28 May 2025 17:16:34 +0000 (0:00:00.656) 0:01:15.836 ********* 2025-05-28 17:17:51.274460 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:17:51.274470 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:17:51.274502 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:17:51.274519 | orchestrator | 2025-05-28 17:17:51.274535 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-05-28 17:17:51.274552 | orchestrator | Wednesday 28 May 2025 17:16:35 +0000 (0:00:00.794) 0:01:16.631 ********* 2025-05-28 17:17:51.274568 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:17:51.274582 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:17:51.274592 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:17:51.274607 | orchestrator | 2025-05-28 17:17:51.274617 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-05-28 17:17:51.274627 | orchestrator | Wednesday 28 May 2025 17:16:35 +0000 (0:00:00.361) 0:01:16.992 ********* 2025-05-28 17:17:51.274637 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:17:51.274646 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:17:51.274655 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:17:51.274665 | orchestrator | 2025-05-28 17:17:51.274674 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-05-28 17:17:51.274684 | orchestrator | Wednesday 28 May 2025 17:16:35 +0000 (0:00:00.340) 0:01:17.332 ********* 2025-05-28 17:17:51.274693 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:17:51.274704 | orchestrator | ok: [testbed-node-1] 2025-05-28 
17:17:51.274719 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:17:51.274735 | orchestrator | 2025-05-28 17:17:51.274751 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-05-28 17:17:51.274766 | orchestrator | Wednesday 28 May 2025 17:16:36 +0000 (0:00:00.627) 0:01:17.960 ********* 2025-05-28 17:17:51.274781 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:17:51.274796 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:17:51.274811 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:17:51.274827 | orchestrator | 2025-05-28 17:17:51.274840 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-05-28 17:17:51.274853 | orchestrator | Wednesday 28 May 2025 17:16:37 +0000 (0:00:00.582) 0:01:18.542 ********* 2025-05-28 17:17:51.274866 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:17:51.274880 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:17:51.274894 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:17:51.274907 | orchestrator | 2025-05-28 17:17:51.274921 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-05-28 17:17:51.274935 | orchestrator | Wednesday 28 May 2025 17:16:37 +0000 (0:00:00.419) 0:01:18.962 ********* 2025-05-28 17:17:51.274950 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:17:51.274964 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:17:51.274978 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:17:51.274992 | orchestrator | 2025-05-28 17:17:51.275008 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-05-28 17:17:51.275023 | orchestrator | Wednesday 28 May 2025 17:16:37 +0000 (0:00:00.355) 0:01:19.318 ********* 2025-05-28 17:17:51.275040 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:17:51.275057 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:17:51.275073 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:17:51.275086 | orchestrator | 2025-05-28 17:17:51.275096 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-05-28 17:17:51.275106 | orchestrator | Wednesday 28 May 2025 17:16:38 +0000 (0:00:00.541) 0:01:19.859 ********* 2025-05-28 17:17:51.275133 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:17:51.275143 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:17:51.275152 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:17:51.275161 | orchestrator | 2025-05-28 17:17:51.275171 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-05-28 17:17:51.275180 | orchestrator | Wednesday 28 May 2025 17:16:38 +0000 (0:00:00.281) 0:01:20.141 ********* 2025-05-28 17:17:51.275190 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:17:51.275208 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:17:51.275221 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:17:51.275238 | orchestrator | 2025-05-28 17:17:51.275250 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-05-28 17:17:51.275262 | orchestrator | Wednesday 28 May 2025 17:16:39 +0000 (0:00:00.302) 0:01:20.443 ********* 2025-05-28 17:17:51.275277 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:17:51.275292 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:17:51.275308 | orchestrator | skipping: [testbed-node-2] 2025-05-28 
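The skipped tasks above are the ovn-db role's cluster lookup: it probes for existing DB container volumes, service port liveness, and Raft leader/follower roles to decide between bootstrapping a new cluster and joining an existing one. On this fresh deploy no volumes exist, so every probe skips and bootstrap-initial.yml is included below. To inspect such a cluster by hand, something like the following works — the control-socket path is an assumption about the image layout:

    # Show Raft cluster state of the running northbound database.
    docker exec ovn_nb_db \
        ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound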
17:17:51.275324 | orchestrator | 2025-05-28 17:17:51.275340 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-05-28 17:17:51.275356 | orchestrator | Wednesday 28 May 2025 17:16:39 +0000 (0:00:00.331) 0:01:20.775 ********* 2025-05-28 17:17:51.275371 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:17:51.275385 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:17:51.275401 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:17:51.275415 | orchestrator | 2025-05-28 17:17:51.275429 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-05-28 17:17:51.275461 | orchestrator | Wednesday 28 May 2025 17:16:39 +0000 (0:00:00.544) 0:01:21.319 ********* 2025-05-28 17:17:51.275499 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:17:51.275512 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:17:51.275522 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:17:51.275531 | orchestrator | 2025-05-28 17:17:51.275541 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-05-28 17:17:51.275550 | orchestrator | Wednesday 28 May 2025 17:16:40 +0000 (0:00:00.346) 0:01:21.666 ********* 2025-05-28 17:17:51.275560 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:17:51.275569 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:17:51.275578 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:17:51.275588 | orchestrator | 2025-05-28 17:17:51.275597 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-05-28 17:17:51.275607 | orchestrator | Wednesday 28 May 2025 17:16:40 +0000 (0:00:00.557) 0:01:22.224 ********* 2025-05-28 17:17:51.275616 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:17:51.275626 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:17:51.275635 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:17:51.275644 | orchestrator | 2025-05-28 17:17:51.275654 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-05-28 17:17:51.275663 | orchestrator | Wednesday 28 May 2025 17:16:41 +0000 (0:00:00.290) 0:01:22.514 ********* 2025-05-28 17:17:51.275673 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:17:51.275682 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:17:51.275691 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:17:51.275701 | orchestrator | 2025-05-28 17:17:51.275710 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-05-28 17:17:51.275720 | orchestrator | Wednesday 28 May 2025 17:16:41 +0000 (0:00:00.747) 0:01:23.262 ********* 2025-05-28 17:17:51.275729 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:17:51.275738 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:17:51.275759 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:17:51.275769 | orchestrator | 2025-05-28 17:17:51.275778 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-28 17:17:51.275787 | orchestrator | Wednesday 28 May 2025 17:16:42 +0000 (0:00:00.414) 0:01:23.676 ********* 2025-05-28 17:17:51.275797 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:17:51.275807 | orchestrator | 2025-05-28 17:17:51.275816 | orchestrator | TASK [ovn-db : Set bootstrap args fact for 
NB (new cluster)] ******************* 2025-05-28 17:17:51.275826 | orchestrator | Wednesday 28 May 2025 17:16:42 +0000 (0:00:00.591) 0:01:24.267 ********* 2025-05-28 17:17:51.275835 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:17:51.275844 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:17:51.275863 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:17:51.275872 | orchestrator | 2025-05-28 17:17:51.275882 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-05-28 17:17:51.275891 | orchestrator | Wednesday 28 May 2025 17:16:43 +0000 (0:00:00.813) 0:01:25.081 ********* 2025-05-28 17:17:51.275901 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:17:51.275910 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:17:51.275919 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:17:51.275929 | orchestrator | 2025-05-28 17:17:51.275938 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-05-28 17:17:51.275948 | orchestrator | Wednesday 28 May 2025 17:16:44 +0000 (0:00:00.416) 0:01:25.498 ********* 2025-05-28 17:17:51.275957 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:17:51.275967 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:17:51.275976 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:17:51.275985 | orchestrator | 2025-05-28 17:17:51.275995 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-05-28 17:17:51.276004 | orchestrator | Wednesday 28 May 2025 17:16:44 +0000 (0:00:00.376) 0:01:25.875 ********* 2025-05-28 17:17:51.276014 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:17:51.276023 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:17:51.276032 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:17:51.276041 | orchestrator | 2025-05-28 17:17:51.276051 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-05-28 17:17:51.276061 | orchestrator | Wednesday 28 May 2025 17:16:44 +0000 (0:00:00.336) 0:01:26.211 ********* 2025-05-28 17:17:51.276070 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:17:51.276079 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:17:51.276088 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:17:51.276098 | orchestrator | 2025-05-28 17:17:51.276107 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-05-28 17:17:51.276117 | orchestrator | Wednesday 28 May 2025 17:16:45 +0000 (0:00:00.527) 0:01:26.738 ********* 2025-05-28 17:17:51.276126 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:17:51.276135 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:17:51.276145 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:17:51.276154 | orchestrator | 2025-05-28 17:17:51.276164 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-05-28 17:17:51.276173 | orchestrator | Wednesday 28 May 2025 17:16:45 +0000 (0:00:00.358) 0:01:27.097 ********* 2025-05-28 17:17:51.276183 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:17:51.276192 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:17:51.276201 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:17:51.276211 | orchestrator | 2025-05-28 17:17:51.276220 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-05-28 17:17:51.276230 | orchestrator | 
Wednesday 28 May 2025 17:16:46 +0000 (0:00:00.350) 0:01:27.447 ********* 2025-05-28 17:17:51.276239 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:17:51.276249 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:17:51.276258 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:17:51.276267 | orchestrator | 2025-05-28 17:17:51.276277 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-05-28 17:17:51.276286 | orchestrator | Wednesday 28 May 2025 17:16:46 +0000 (0:00:00.325) 0:01:27.773 ********* 2025-05-28 17:17:51.276302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.276323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.276339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.276354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.276367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.276377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.276387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.276397 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.276407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.276417 | orchestrator | 2025-05-28 17:17:51.276427 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-28 17:17:51.276437 | orchestrator | Wednesday 28 May 2025 17:16:48 +0000 (0:00:01.716) 0:01:29.490 ********* 2025-05-28 17:17:51.276447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.276461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.276477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.276509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.276525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.276535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.276545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.276555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.276564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.276574 | orchestrator | 2025-05-28 17:17:51.276584 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-28 17:17:51.276593 | orchestrator | Wednesday 28 May 2025 17:16:52 +0000 (0:00:04.810) 0:01:34.300 ********* 2025-05-28 17:17:51.276603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.276613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.276633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.276643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.276653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.276668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.276678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.276688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.276698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.276707 | orchestrator | 2025-05-28 17:17:51.276717 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-28 17:17:51.276726 | orchestrator | Wednesday 28 May 2025 17:16:55 +0000 (0:00:02.102) 0:01:36.403 ********* 2025-05-28 17:17:51.276736 | orchestrator | 2025-05-28 17:17:51.276745 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-28 17:17:51.276755 | orchestrator | Wednesday 28 May 2025 17:16:55 +0000 (0:00:00.067) 0:01:36.470 ********* 2025-05-28 17:17:51.276764 | orchestrator | 2025-05-28 17:17:51.276774 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-28 17:17:51.276783 | orchestrator | Wednesday 28 May 2025 17:16:55 +0000 (0:00:00.064) 0:01:36.534 ********* 2025-05-28 17:17:51.276792 | orchestrator | 2025-05-28 17:17:51.276802 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-05-28 17:17:51.276811 | orchestrator | Wednesday 28 May 2025 17:16:55 +0000 (0:00:00.065) 0:01:36.600 ********* 2025-05-28 17:17:51.276821 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:17:51.276835 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:17:51.276845 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:17:51.276854 | orchestrator | 2025-05-28 17:17:51.276864 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-05-28 
17:17:51.276873 | orchestrator | Wednesday 28 May 2025 17:17:02 +0000 (0:00:07.428) 0:01:44.028 ********* 2025-05-28 17:17:51.276882 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:17:51.276892 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:17:51.276901 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:17:51.276910 | orchestrator | 2025-05-28 17:17:51.276920 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-05-28 17:17:51.276929 | orchestrator | Wednesday 28 May 2025 17:17:10 +0000 (0:00:07.603) 0:01:51.632 ********* 2025-05-28 17:17:51.276939 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:17:51.276948 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:17:51.276957 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:17:51.276967 | orchestrator | 2025-05-28 17:17:51.276976 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-05-28 17:17:51.276985 | orchestrator | Wednesday 28 May 2025 17:17:12 +0000 (0:00:02.519) 0:01:54.151 ********* 2025-05-28 17:17:51.276995 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:17:51.277004 | orchestrator | 2025-05-28 17:17:51.277014 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-05-28 17:17:51.277023 | orchestrator | Wednesday 28 May 2025 17:17:12 +0000 (0:00:00.128) 0:01:54.280 ********* 2025-05-28 17:17:51.277032 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:17:51.277042 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:17:51.277051 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:17:51.277060 | orchestrator | 2025-05-28 17:17:51.277070 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-05-28 17:17:51.277079 | orchestrator | Wednesday 28 May 2025 17:17:13 +0000 (0:00:00.781) 0:01:55.061 ********* 2025-05-28 17:17:51.277089 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:17:51.277098 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:17:51.277108 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:17:51.277117 | orchestrator | 2025-05-28 17:17:51.277126 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-05-28 17:17:51.277136 | orchestrator | Wednesday 28 May 2025 17:17:14 +0000 (0:00:00.818) 0:01:55.879 ********* 2025-05-28 17:17:51.277145 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:17:51.277155 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:17:51.277164 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:17:51.277173 | orchestrator | 2025-05-28 17:17:51.277183 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-05-28 17:17:51.277192 | orchestrator | Wednesday 28 May 2025 17:17:15 +0000 (0:00:00.876) 0:01:56.756 ********* 2025-05-28 17:17:51.277201 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:17:51.277211 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:17:51.277220 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:17:51.277229 | orchestrator | 2025-05-28 17:17:51.277239 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-05-28 17:17:51.277248 | orchestrator | Wednesday 28 May 2025 17:17:16 +0000 (0:00:00.654) 0:01:57.411 ********* 2025-05-28 17:17:51.277258 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:17:51.277267 | orchestrator | ok: [testbed-node-1] 2025-05-28 
17:17:51.277282 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:17:51.277291 | orchestrator | 2025-05-28 17:17:51.277301 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-05-28 17:17:51.277310 | orchestrator | Wednesday 28 May 2025 17:17:16 +0000 (0:00:00.704) 0:01:58.116 ********* 2025-05-28 17:17:51.277320 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:17:51.277329 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:17:51.277339 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:17:51.277348 | orchestrator | 2025-05-28 17:17:51.277357 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-05-28 17:17:51.277373 | orchestrator | Wednesday 28 May 2025 17:17:17 +0000 (0:00:01.159) 0:01:59.275 ********* 2025-05-28 17:17:51.277383 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:17:51.277392 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:17:51.277402 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:17:51.277411 | orchestrator | 2025-05-28 17:17:51.277420 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-05-28 17:17:51.277430 | orchestrator | Wednesday 28 May 2025 17:17:18 +0000 (0:00:00.316) 0:01:59.592 ********* 2025-05-28 17:17:51.277440 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.277450 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.277460 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.277469 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.277539 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.277557 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.277568 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.277577 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.277594 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.277611 | orchestrator | 2025-05-28 17:17:51.277621 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-28 17:17:51.277631 | orchestrator | Wednesday 28 May 2025 17:17:19 +0000 (0:00:01.436) 0:02:01.028 ********* 2025-05-28 17:17:51.277641 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.277650 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.277660 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.277670 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.277679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 
'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.277689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.277703 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.277713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.277723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.277738 | orchestrator | 2025-05-28 17:17:51.277748 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-28 17:17:51.277758 | orchestrator | Wednesday 28 May 2025 17:17:23 +0000 (0:00:03.754) 0:02:04.783 ********* 2025-05-28 17:17:51.277773 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.277783 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.277793 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.277802 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.277812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.277822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.277832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.277849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.277860 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:17:51.277876 | orchestrator |
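The item dictionaries echoed in these results come from a services map that the ovn-db role iterates: each entry carries the container name, inventory group, image, and volumes for one OVN service. A minimal sketch of the pattern, with simplified variable names that are not the role's verbatim code:

    ovn_db_services:
      ovn-northd:
        container_name: ovn_northd
        group: ovn-northd
        enabled: true
        image: registry.osism.tech/kolla/ovn-northd:2024.2
        volumes:
          - "/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro"
          - "/etc/localtime:/etc/localtime:ro"
          - "kolla_logs:/var/log/kolla/"

    - name: Ensuring config directories exist
      ansible.builtin.file:
        path: "/etc/kolla/{{ item.key }}"   # one directory per service
        state: directory
        mode: "0770"
      with_dict: "{{ ovn_db_services }}"

with_dict yields one key/value pair per service, which is why every result line above prints (item={'key': ..., 'value': ...}).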
2025-05-28 17:17:51.277886 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-28 17:17:51.277895 | orchestrator | Wednesday 28 May 2025 17:17:26 +0000 (0:00:00.063) 0:02:07.778 ********* 2025-05-28 17:17:51.277905 | orchestrator | 2025-05-28 17:17:51.277915 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-28 17:17:51.277924 | orchestrator | Wednesday 28 May 2025 17:17:26 +0000 (0:00:00.064) 0:02:07.842 ********* 2025-05-28 17:17:51.277933 | orchestrator | 2025-05-28 17:17:51.277943 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-28 17:17:51.277952 | orchestrator | Wednesday 28 May 2025 17:17:26 +0000 (0:00:00.064) 0:02:07.906 ********* 2025-05-28 17:17:51.277961 | orchestrator |
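The "Flush handlers" entries and the "RUNNING HANDLER" blocks that follow reflect the usual kolla-ansible pattern: configuration tasks notify a per-container restart handler, and meta: flush_handlers forces pending handlers to run mid-play so containers restart with the new configuration before the role continues. A condensed sketch of that pattern (the template path is hypothetical, and the real role uses its own container module rather than community.docker):

    tasks:
      - name: Copying over config.json files for services
        ansible.builtin.template:
          src: ovn-nb-db.json.j2              # hypothetical template name
          dest: /etc/kolla/ovn-nb-db/config.json
          mode: "0660"
        notify: Restart ovn-nb-db container

      - name: Flush handlers
        ansible.builtin.meta: flush_handlers   # run notified handlers now, not at play end

    handlers:
      - name: Restart ovn-nb-db container
        community.docker.docker_container:
          name: ovn_nb_db
          image: registry.osism.tech/kolla/ovn-nb-db-server:2024.2
          state: started
          restart: true                        # force a stop/start even if the container is up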
2025-05-28 17:17:51.277971 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-05-28 17:17:51.277980 | orchestrator | Wednesday 28 May 2025 17:17:26 +0000 (0:00:00.065) 0:02:07.971 ********* 2025-05-28 17:17:51.277989 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:17:51.277999 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:17:51.278008 | orchestrator | 2025-05-28 17:17:51.278061 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-05-28 17:17:51.278074 | orchestrator | Wednesday 28 May 2025 17:17:32 +0000 (0:00:06.277) 0:02:14.249 ********* 2025-05-28 17:17:51.278083 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:17:51.278093 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:17:51.278102 | orchestrator | 2025-05-28 17:17:51.278112 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-05-28 17:17:51.278121 | orchestrator | Wednesday 28 May 2025 17:17:39 +0000 (0:00:06.160) 0:02:20.410 ********* 2025-05-28 17:17:51.278131 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:17:51.278141 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:17:51.278150 | orchestrator | 2025-05-28 17:17:51.278160 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-05-28 17:17:51.278170 | orchestrator | Wednesday 28 May 2025 17:17:45 +0000 (0:00:06.086) 0:02:26.496 ********* 2025-05-28 17:17:51.278179 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:17:51.278189 | orchestrator | 2025-05-28 17:17:51.278198 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-05-28 17:17:51.278208 | orchestrator | Wednesday 28 May 2025 17:17:45 +0000 (0:00:00.277) 0:02:26.773 ********* 2025-05-28 17:17:51.278218 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:17:51.278227 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:17:51.278237 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:17:51.278246 | orchestrator | 2025-05-28 17:17:51.278256 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-05-28 17:17:51.278265 | orchestrator | Wednesday 28 May 2025 17:17:46 +0000 (0:00:01.022) 0:02:27.796 ********* 2025-05-28 17:17:51.278275 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:17:51.278284 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:17:51.278294 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:17:51.278303 | orchestrator | 2025-05-28 17:17:51.278313 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-05-28 17:17:51.278322 | orchestrator | Wednesday 28 May 2025 17:17:47 +0000 (0:00:00.704) 0:02:28.501 ********* 2025-05-28 17:17:51.278332 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:17:51.278341 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:17:51.278351 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:17:51.278361 | orchestrator | 2025-05-28 17:17:51.278370 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-05-28 17:17:51.278380 | orchestrator | Wednesday 28 May 2025 17:17:47 +0000 (0:00:00.801) 0:02:29.302 ********* 2025-05-28 17:17:51.278389 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:17:51.278399 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:17:51.278408 | orchestrator | changed: [testbed-node-0]
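The skip/changed split in the two "Configure OVN ... connection settings" tasks is explained by the leader lookup just before them: all three nodes query the RAFT cluster state of the clustered OVSDB, but the connection settings are applied once, on the current leader only (here testbed-node-0). A sketch of the idea with the stock OVN tooling, followed by the wait for the listener; the command lines are illustrative, not lifted from the role, and 6641/6642 are the default OVN Northbound/Southbound ports:

    - name: Get OVN_Northbound cluster leader
      ansible.builtin.command: >
        docker exec ovn_nb_db ovs-appctl -t /var/run/ovn/ovnnb_db.ctl
        cluster/status OVN_Northbound
      register: nb_cluster_status
      changed_when: false

    - name: Configure OVN NB connection settings
      ansible.builtin.command: >
        docker exec ovn_nb_db ovn-nbctl --inactivity-probe=30000
        set-connection ptcp:6641:0.0.0.0
      when: "'Role: leader' in nb_cluster_status.stdout"

    - name: Wait for ovn-nb-db
      ansible.builtin.wait_for:
        host: "{{ ansible_host }}"
        port: 6641          # 6642 for the Southbound DB
        timeout: 60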
2025-05-28 17:17:51.278417 | orchestrator | 2025-05-28 17:17:51.278427 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-05-28 17:17:51.278445 | orchestrator | Wednesday 28 May 2025 17:17:48 +0000 (0:00:00.625) 0:02:29.928 ********* 2025-05-28 17:17:51.278454 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:17:51.278464 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:17:51.278473 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:17:51.278505 | orchestrator | 2025-05-28 17:17:51.278522 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-05-28 17:17:51.278539 | orchestrator | Wednesday 28 May 2025 17:17:49 +0000 (0:00:01.045) 0:02:30.973 ********* 2025-05-28 17:17:51.278554 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:17:51.278567 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:17:51.278577 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:17:51.278586 | orchestrator | 2025-05-28 17:17:51.278596 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:17:51.278606 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-05-28 17:17:51.278616 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-28 17:17:51.278625 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-28 17:17:51.278640 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:17:51.278650 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:17:51.278659 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:17:51.278669 | orchestrator | 2025-05-28 17:17:51.278678 | orchestrator |
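The per-task "Wednesday 28 May ..." timestamps above and the duration-sorted TASKS RECAP that follows are produced by the ansible.posix.profile_tasks callback rather than by Ansible core; enabling it takes one line of configuration:

    # ansible.cfg
    [defaults]
    callbacks_enabled = ansible.posix.profile_tasks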
2025-05-28 17:17:51.278688 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:17:51.278697 | orchestrator | Wednesday 28 May 2025 17:17:50 +0000 (0:00:00.863) 0:02:31.836 ********* 2025-05-28 17:17:51.278707 | orchestrator | =============================================================================== 2025-05-28 17:17:51.278716 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 37.68s 2025-05-28 17:17:51.278726 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.19s 2025-05-28 17:17:51.278735 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.76s 2025-05-28 17:17:51.278744 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.71s 2025-05-28 17:17:51.278754 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.61s 2025-05-28 17:17:51.278763 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.81s 2025-05-28 17:17:51.278773 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.75s 2025-05-28 17:17:51.278787 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.00s 2025-05-28 17:17:51.278797 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.62s 2025-05-28 17:17:51.278807 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.52s 2025-05-28 17:17:51.278816 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.12s 2025-05-28 17:17:51.278826 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.10s 2025-05-28 17:17:51.278835 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.91s 2025-05-28 17:17:51.278844 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.72s 2025-05-28 17:17:51.278854 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.71s 2025-05-28 17:17:51.278873 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.65s 2025-05-28 17:17:51.278882 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.58s 2025-05-28 17:17:51.278892 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.47s 2025-05-28 17:17:51.278901 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.44s 2025-05-28 17:17:51.278911 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.40s 2025-05-28 17:17:51.278920 | orchestrator | 2025-05-28 17:17:51 | INFO  | Task 1ea0541c-b057-47d5-b02b-8a8ffc1acf6d is in state SUCCESS 2025-05-28 17:17:51.278930 | orchestrator | 2025-05-28 17:17:51 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:17:54.324253 | orchestrator | 2025-05-28 17:17:54 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:17:54.326191 | orchestrator | 2025-05-28 17:17:54 | INFO  | Task 31ad9459-261b-4617-89f4-12da6da9de0a is in state STARTED 2025-05-28 17:17:54.326381 | orchestrator | 2025-05-28 17:17:54 | INFO  | Wait 1 second(s) until the next check
[... poll loop condensed: the cycle above repeated roughly every 3 seconds until 17:20:17, with tasks 498abbe0-8763-4901-8190-d0026b259450 and 31ad9459-261b-4617-89f4-12da6da9de0a in state STARTED throughout; in between, task 74b30dfb-576c-4936-9455-1074d7c20fd8 additionally reported STARTED from 17:18:46 and SUCCESS at 17:19:04 ...]
2025-05-28 17:20:20.748764 | orchestrator | 2025-05-28 17:20:20 | INFO  | Task 75f8a76b-6ea2-42d1-99f7-97e14c9e1a7d is in state STARTED 2025-05-28 17:20:20.748899 | orchestrator | 2025-05-28 17:20:20 | INFO  | Task 4fcc0b0c-bcde-4847-b04f-c856fbe593ed is in state STARTED 2025-05-28 17:20:20.748928 | orchestrator | 2025-05-28 17:20:20 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED 2025-05-28 17:20:20.757494 | orchestrator | 2025-05-28 17:20:20 | INFO  | Task 31ad9459-261b-4617-89f4-12da6da9de0a is in state SUCCESS
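While kolla-ansible runs inside the manager, the deploy wrapper only sees the Celery-style task states (STARTED, SUCCESS) shown above and polls every few seconds until each task reaches a terminal state. The same wait can be expressed as an Ansible until-loop; /usr/local/bin/task-state below is a hypothetical helper that prints the state for a given task ID:

    - name: Wait for a manager task to finish
      ansible.builtin.command: /usr/local/bin/task-state 498abbe0-8763-4901-8190-d0026b259450
      register: task_state
      until: task_state.stdout in ['SUCCESS', 'FAILURE']   # stop polling on any terminal state
      retries: 200                                          # 200 tries x 3 s is roughly 10 minutes
      delay: 3
      changed_when: false
      failed_when: task_state.stdout == 'FAILURE'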
2025-05-28 17:20:20.759259 | orchestrator | 2025-05-28 17:20:20.759301 | orchestrator | None 2025-05-28 17:20:20.759313 | orchestrator | 2025-05-28 17:20:20.759325 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-28 17:20:20.759337 | orchestrator | 2025-05-28 17:20:20.759349 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-28 17:20:20.759388 | orchestrator | Wednesday 28 May 2025 17:14:08 +0000 (0:00:00.601) 0:00:00.601 ********* 2025-05-28 17:20:20.759404 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:20:20.759417 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:20:20.759428 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:20:20.759439 | orchestrator | 2025-05-28 17:20:20.759450 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-28 17:20:20.759461 | orchestrator | Wednesday 28 May 2025 17:14:08 +0000 (0:00:00.628) 0:00:01.230 ********* 2025-05-28 17:20:20.759473 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-05-28 17:20:20.759506 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-05-28 17:20:20.759518 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-05-28 17:20:20.759529 | orchestrator | 2025-05-28 17:20:20.759540 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-05-28 17:20:20.759552 | orchestrator | 2025-05-28 17:20:20.759702 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-05-28 17:20:20.759720 | orchestrator | Wednesday 28 May 2025 17:14:09 +0000 (0:00:00.729) 0:00:01.959 ********* 2025-05-28 17:20:20.759731 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:20:20.759743 | orchestrator | 2025-05-28 17:20:20.760294 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-05-28 17:20:20.760345 | orchestrator | Wednesday 28 May 2025 17:14:10 +0000 (0:00:00.604) 0:00:02.564 ********* 2025-05-28 17:20:20.760357 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:20:20.760397 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:20:20.760408 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:20:20.760419 | orchestrator | 2025-05-28 17:20:20.760430 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-05-28 17:20:20.760555 | orchestrator | Wednesday 28 May 2025 17:14:11 +0000 (0:00:01.028) 0:00:03.592 ********* 2025-05-28 17:20:20.760568 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:20:20.760578 | orchestrator | 2025-05-28 17:20:20.760589 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-05-28 17:20:20.760600 | orchestrator | Wednesday 28 May 2025 17:14:12 +0000 (0:00:01.112) 0:00:04.705 ********* 2025-05-28 17:20:20.760611 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:20:20.760622 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:20:20.760632 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:20:20.760643 | orchestrator | 2025-05-28 17:20:20.760653 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-05-28 17:20:20.760664 | orchestrator | Wednesday 28 May 2025 17:14:13 +0000 (0:00:01.200) 0:00:05.906 ********* 2025-05-28 17:20:20.760675 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-28 17:20:20.760687 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-28 17:20:20.760697 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-28 17:20:20.760708 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-28 17:20:20.762283 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-28 17:20:20.762321 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-28 17:20:20.762335 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-28 17:20:20.762347 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-28 17:20:20.762357 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-28 17:20:20.762392 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-28 17:20:20.762404 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-28 17:20:20.762414 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-28 17:20:20.762426 | orchestrator | 2025-05-28 17:20:20.762438 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-28 17:20:20.762449 | orchestrator | Wednesday 28 May 2025 17:14:19 +0000 (0:00:05.798) 0:00:11.704 ********* 2025-05-28 17:20:20.762460 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-05-28 17:20:20.762472 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-05-28 17:20:20.762483 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-05-28 17:20:20.762493 | orchestrator | 2025-05-28 17:20:20.762504 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-28 17:20:20.762515 | orchestrator | Wednesday 28 May 2025 17:14:20 +0000 (0:00:01.087) 0:00:12.792 ********* 2025-05-28 17:20:20.762525 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-05-28 17:20:20.762536 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-05-28 17:20:20.762547 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-05-28 17:20:20.762557 | orchestrator | 2025-05-28 17:20:20.762568 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-28 17:20:20.762578 | orchestrator | Wednesday 28 May 2025 17:14:21 +0000 (0:00:01.470) 0:00:14.262 ********* 2025-05-28 17:20:20.762622 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-05-28 17:20:20.762634 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.762664 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-05-28 17:20:20.762676 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.762687 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-05-28 17:20:20.762697 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.762708 | orchestrator |
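These loadbalancer prerequisites set the nonlocal-bind sysctls (so HAProxy/keepalived can bind the virtual IP even on nodes that do not currently hold it) and load the ip_vs module, persisting it through modules-load.d. A simplified sketch of the three steps, using the standard modules and omitting the role's 'KOLLA_UNSET' handling:

    - name: Setting sysctl values
      ansible.posix.sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        sysctl_set: true            # apply immediately as well as persisting
      loop:
        - { name: net.ipv4.ip_nonlocal_bind, value: 1 }
        - { name: net.ipv6.ip_nonlocal_bind, value: 1 }
        - { name: net.unix.max_dgram_qlen, value: 128 }

    - name: Load modules
      community.general.modprobe:
        name: ip_vs
        state: present

    - name: Persist modules via modules-load.d
      ansible.builtin.copy:
        content: "ip_vs\n"
        dest: /etc/modules-load.d/ip_vs.conf
        mode: "0644"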
2025-05-28 17:20:20.762718 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-05-28 17:20:20.762729 | orchestrator | Wednesday 28 May 2025 17:14:22 +0000 (0:00:00.875) 0:00:15.137 ********* 2025-05-28 17:20:20.762757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-28 17:20:20.762776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-28 17:20:20.762787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-28 17:20:20.762798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-28 17:20:20.762811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-28 17:20:20.762831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-28 17:20:20.762867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-28 17:20:20.762885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-28 17:20:20.762897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-28 17:20:20.762908 | orchestrator | 2025-05-28 17:20:20.762919 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-05-28 17:20:20.762930 | orchestrator | Wednesday 28 May 2025 17:14:24 +0000 (0:00:02.036) 0:00:17.174 ********* 2025-05-28 17:20:20.762941 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.762951 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:20:20.762962 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:20:20.762972 | orchestrator | 2025-05-28 17:20:20.762983 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-05-28 17:20:20.762994 | orchestrator | Wednesday 28 May 2025 17:14:25 +0000 (0:00:00.808) 0:00:17.983 ********* 2025-05-28 17:20:20.763004 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-05-28 17:20:20.763015 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-05-28 17:20:20.763025 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-05-28 17:20:20.763036 | orchestrator | changed: 
[testbed-node-0] => (item=rules) 2025-05-28 17:20:20.763047 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-05-28 17:20:20.763057 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-05-28 17:20:20.763068 | orchestrator |
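Each of the loadbalancer tasks in this play loops over the same service map, which is what makes the item dumps so bulky. Reconstructed as YAML from the dictionaries in the log, abridged by hand: the healthcheck address is shown for testbed-node-0 (the other nodes differ only in the API interface address), repeated healthcheck fields are trimmed, and the disabled haproxy-ssh entry is omitted:

haproxy:
  container_name: haproxy
  group: loadbalancer
  enabled: true
  image: registry.osism.tech/kolla/haproxy:2024.2
  privileged: true
  volumes:
    - /etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro
    - /etc/localtime:/etc/localtime:ro
    - /etc/timezone:/etc/timezone:ro
    - haproxy_socket:/var/lib/kolla/haproxy/
    - letsencrypt_certificates:/etc/haproxy/certificates
  healthcheck:
    test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:61313"]
    interval: "30"
    retries: "3"
    start_period: "5"
    timeout: "30"

proxysql:
  container_name: proxysql
  group: loadbalancer
  enabled: true
  image: registry.osism.tech/kolla/proxysql:2024.2
  privileged: false
  volumes:
    - /etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro
    - /etc/localtime:/etc/localtime:ro
    - /etc/timezone:/etc/timezone:ro
    - kolla_logs:/var/log/kolla/
    - proxysql:/var/lib/proxysql/
    - proxysql_socket:/var/lib/kolla/proxysql/
  healthcheck:
    test: ["CMD-SHELL", "healthcheck_listen proxysql 6032"]

keepalived:
  container_name: keepalived
  group: loadbalancer
  enabled: true
  image: registry.osism.tech/kolla/keepalived:2024.2
  privileged: true
  volumes:
    - /etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro
    - /etc/localtime:/etc/localtime:ro
    - /etc/timezone:/etc/timezone:ro
    - /lib/modules:/lib/modules:ro
    - haproxy_socket:/var/lib/kolla/haproxy/
    - proxysql_socket:/var/lib/kolla/proxysql/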
2025-05-28 17:20:20.763079 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-05-28 17:20:20.763089 | orchestrator | Wednesday 28 May 2025 17:14:27 +0000 (0:00:01.684) 0:00:19.668 ********* 2025-05-28 17:20:20.763100 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.763111 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:20:20.763121 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:20:20.763132 | orchestrator | 2025-05-28 17:20:20.763142 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-05-28 17:20:20.763153 | orchestrator | Wednesday 28 May 2025 17:14:29 +0000 (0:00:01.981) 0:00:21.649 ********* 2025-05-28 17:20:20.763170 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:20:20.763181 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:20:20.763192 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:20:20.763202 | orchestrator | 2025-05-28 17:20:20.763213 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-05-28 17:20:20.763224 | orchestrator | Wednesday 28 May 2025 17:14:31 +0000 (0:00:01.947) 0:00:23.597 ********* 2025-05-28 17:20:20.763235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-28 17:20:20.763256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-28 17:20:20.763273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 17:20:20.763286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image':
'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__64e591159e93f15a1ad629703ae5216297e34e25', '__omit_place_holder__64e591159e93f15a1ad629703ae5216297e34e25'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-28 17:20:20.763297 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.763308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-28 17:20:20.763320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-28 17:20:20.763339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 17:20:20.763351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__64e591159e93f15a1ad629703ae5216297e34e25', '__omit_place_holder__64e591159e93f15a1ad629703ae5216297e34e25'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-28 17:20:20.763416 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.763437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes':
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-28 17:20:20.763455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-28 17:20:20.763466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 17:20:20.763478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__64e591159e93f15a1ad629703ae5216297e34e25', '__omit_place_holder__64e591159e93f15a1ad629703ae5216297e34e25'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-28 17:20:20.763496 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.763507 | orchestrator | 2025-05-28 17:20:20.763518 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-05-28 17:20:20.763529 | orchestrator | Wednesday 28 May 2025 17:14:31 +0000 (0:00:00.506) 0:00:24.104 ********* 2025-05-28 17:20:20.763540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-28 17:20:20.763551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-28 17:20:20.763619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-28 17:20:20.763637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-28 17:20:20.763648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-28 17:20:20.763660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 17:20:20.763684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': 
{}}})  2025-05-28 17:20:20.763695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__64e591159e93f15a1ad629703ae5216297e34e25', '__omit_place_holder__64e591159e93f15a1ad629703ae5216297e34e25'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-28 17:20:20.763706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__64e591159e93f15a1ad629703ae5216297e34e25', '__omit_place_holder__64e591159e93f15a1ad629703ae5216297e34e25'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-28 17:20:20.763724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-28 17:20:20.763740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 17:20:20.763752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__64e591159e93f15a1ad629703ae5216297e34e25', '__omit_place_holder__64e591159e93f15a1ad629703ae5216297e34e25'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-28 17:20:20.763770 | orchestrator | 2025-05-28 17:20:20.763781 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] 
************** 2025-05-28 17:20:20.763792 | orchestrator | Wednesday 28 May 2025 17:14:34 +0000 (0:00:02.743) 0:00:26.847 ********* 2025-05-28 17:20:20.763804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-28 17:20:20.763816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-28 17:20:20.763827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-28 17:20:20.763845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-28 17:20:20.763861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-28 17:20:20.763873 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-28 17:20:20.763897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-28 17:20:20.763909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-28 17:20:20.763920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-28 17:20:20.763931 | orchestrator | 2025-05-28 17:20:20.763942 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-05-28 17:20:20.763953 | orchestrator | Wednesday 28 May 2025 17:14:37 +0000 (0:00:03.521) 0:00:30.369 ********* 2025-05-28 17:20:20.763964 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-28 17:20:20.763975 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-28 17:20:20.763986 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-28 17:20:20.763996 | orchestrator | 2025-05-28 17:20:20.764007 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-05-28 17:20:20.764018 | orchestrator | Wednesday 28 May 2025 17:14:40 +0000 (0:00:02.616) 0:00:32.985 ********* 2025-05-28 17:20:20.764029 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-28 17:20:20.764040 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-28 17:20:20.764055 | orchestrator | changed: 
[testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-28 17:20:20.764066 | orchestrator | 2025-05-28 17:20:20.764077 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-05-28 17:20:20.764088 | orchestrator | Wednesday 28 May 2025 17:14:46 +0000 (0:00:05.605) 0:00:38.591 ********* 2025-05-28 17:20:20.764099 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.764110 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.764121 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.764131 | orchestrator | 2025-05-28 17:20:20.764142 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-05-28 17:20:20.764153 | orchestrator | Wednesday 28 May 2025 17:14:46 +0000 (0:00:00.571) 0:00:39.163 ********* 2025-05-28 17:20:20.764164 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-28 17:20:20.764176 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-28 17:20:20.764195 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-28 17:20:20.764206 | orchestrator | 2025-05-28 17:20:20.764217 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-05-28 17:20:20.764228 | orchestrator | Wednesday 28 May 2025 17:14:49 +0000 (0:00:02.585) 0:00:41.748 ********* 2025-05-28 17:20:20.764238 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-28 17:20:20.764249 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-28 17:20:20.764260 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-28 17:20:20.764271 | orchestrator | 2025-05-28 17:20:20.764282 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-05-28 17:20:20.764293 | orchestrator | Wednesday 28 May 2025 17:14:51 +0000 (0:00:02.438) 0:00:44.186 ********* 2025-05-28 17:20:20.764303 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-05-28 17:20:20.764314 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-05-28 17:20:20.764325 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-05-28 17:20:20.764336 | orchestrator | 2025-05-28 17:20:20.764347 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-05-28 17:20:20.764358 | orchestrator | Wednesday 28 May 2025 17:14:53 +0000 (0:00:01.871) 0:00:46.058 ********* 2025-05-28 17:20:20.764400 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-05-28 17:20:20.764412 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-05-28 17:20:20.764422 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-05-28 17:20:20.764433 | orchestrator |
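keepalived.conf, the custom services.d overlay from /opt/configuration, and the two PEM bundles above are produced by ordinary template and copy tasks. A minimal sketch of the pattern under stated assumptions: node_config_directory and kolla_certificates_dir are assumed variable names and the handler names are illustrative; this is not the actual kolla-ansible source:

- name: Copying over keepalived.conf
  ansible.builtin.template:
    src: /ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2
    dest: "{{ node_config_directory }}/keepalived/keepalived.conf"  # assumed variable
    mode: "0660"
  notify: Restart keepalived container  # illustrative handler name

- name: Copying over haproxy.pem and haproxy-internal.pem
  ansible.builtin.copy:
    src: "{{ kolla_certificates_dir }}/{{ item }}"  # assumed variable
    dest: "{{ node_config_directory }}/haproxy/{{ item }}"
    mode: "0660"
  loop:
    - haproxy.pem
    - haproxy-internal.pem
  notify: Restart haproxy container  # illustrative handler name

haproxy.pem serves the external API VIP and haproxy-internal.pem the internal one, which is why both bundles are distributed to every loadbalancer node.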
2025-05-28 17:20:20.764444 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-05-28 17:20:20.764454 | orchestrator | Wednesday 28 May 2025 17:14:55 +0000 (0:00:01.739) 0:00:47.798 ********* 2025-05-28 17:20:20.764494 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:20:20.764505 | orchestrator | 2025-05-28 17:20:20.764516 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-05-28 17:20:20.764527 | orchestrator | Wednesday 28 May 2025 17:14:56 +0000 (0:00:00.731) 0:00:48.529 ********* 2025-05-28 17:20:20.764538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-28 17:20:20.764550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-28 17:20:20.764567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-28 17:20:20.764591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-28 17:20:20.764603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/',
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-28 17:20:20.764614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-28 17:20:20.764625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-28 17:20:20.764636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-28 17:20:20.764648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-28 17:20:20.764658 | orchestrator | 2025-05-28 17:20:20.764676 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-05-28 17:20:20.764687 | orchestrator | Wednesday 28 May 2025 17:14:59 +0000 (0:00:03.218) 0:00:51.748 ********* 2025-05-28 17:20:20.764706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-28 17:20:20.764739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-28 17:20:20.764752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 17:20:20.764763 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.764774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-28 17:20:20.764786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-28 17:20:20.764797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 17:20:20.764808 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.764826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-28 17:20:20.764849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-28 17:20:20.764862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 17:20:20.764873 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.764884 | orchestrator | 2025-05-28 17:20:20.764895 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-05-28 17:20:20.764905 | orchestrator | Wednesday 28 May 2025 17:14:59 +0000 (0:00:00.586) 0:00:52.334 ********* 2025-05-28 17:20:20.764917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-28 17:20:20.764928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-28 17:20:20.764939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 17:20:20.764950 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.764967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-28 17:20:20.764984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-28 17:20:20.765000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 17:20:20.765012 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.765023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-28 17:20:20.765034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-28 
17:20:20.765045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 17:20:20.765056 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.765067 | orchestrator |
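Every backend TLS item in the service-cert-copy tasks here is skipped: the PEM bundles for the VIPs were installed above, but TLS towards the individual service backends is not enabled in this testbed, so there is nothing to distribute. A hedged sketch of the etc/kolla/globals.yml switches that would activate these code paths; the variable names follow the kolla-ansible TLS documentation and should be verified against the deployed release:

kolla_enable_tls_internal: "yes"
kolla_enable_tls_external: "yes"
kolla_enable_tls_backend: "yes"        # renders the backend TLS certificate and key tasks
kolla_copy_ca_into_containers: "yes"   # controls the extra CA certificate distribution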
2025-05-28 17:20:20.765078 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-05-28 17:20:20.765089 | orchestrator | Wednesday 28 May 2025 17:15:01 +0000 (0:00:01.179) 0:00:53.513 ********* 2025-05-28 17:20:20.765106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-28 17:20:20.765122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-28 17:20:20.765134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 17:20:20.765145 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.765160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-28 17:20:20.765172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-28 17:20:20.765183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 17:20:20.765194 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.765205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-28 17:20:20.765223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-28 17:20:20.765255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 17:20:20.765267 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.765278 | orchestrator | 2025-05-28 17:20:20.765289 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-05-28 17:20:20.765299 | orchestrator | Wednesday 28 May 2025 17:15:02 +0000 (0:00:00.978) 0:00:54.491 ********* 2025-05-28 17:20:20.765331 | orchestrator | skipping: [testbed-node-0] =>
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-28 17:20:20.765344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-28 17:20:20.765355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 17:20:20.765384 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.765396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-28 17:20:20.765419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-28 17:20:20.765430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 17:20:20.765442 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.765460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-28 17:20:20.765477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-28 17:20:20.765488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 17:20:20.765499 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.765510 | orchestrator | 2025-05-28 17:20:20.765521 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-05-28 17:20:20.765532 | orchestrator | Wednesday 28 May 2025 17:15:02 +0000 (0:00:00.673) 0:00:55.165 ********* 2025-05-28 17:20:20.765543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-28 17:20:20.765561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-28 17:20:20.765572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 17:20:20.765583 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.765600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-28 17:20:20.765616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-28 17:20:20.765628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 17:20:20.765639 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.765650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-28 
17:20:20.765671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-28 17:20:20.765683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 17:20:20.765694 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.765705 | orchestrator | 2025-05-28 17:20:20.765716 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-05-28 17:20:20.765727 | orchestrator | Wednesday 28 May 2025 17:15:03 +0000 (0:00:01.198) 0:00:56.364 ********* 2025-05-28 17:20:20.765738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-28 17:20:20.765755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-28 17:20:20.765772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 17:20:20.765783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-28 17:20:20.765801 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.765812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-28 17:20:20.765823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 17:20:20.765834 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.765846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-28 17:20:20.765862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-28 17:20:20.765874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 17:20:20.765885 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.765895 | orchestrator | 2025-05-28 17:20:20.765911 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-05-28 17:20:20.765923 | orchestrator | Wednesday 28 May 2025 17:15:04 +0000 (0:00:00.746) 0:00:57.110 ********* 2025-05-28 17:20:20.765934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-28 17:20:20.765953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-28 17:20:20.765964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 17:20:20.765975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-28 17:20:20.765986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-28 17:20:20.765997 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.766049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 17:20:20.766063 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.766080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-28 17:20:20.766099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-28 17:20:20.766110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 17:20:20.766121 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.766132 | orchestrator | 2025-05-28 17:20:20.766143 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-05-28 17:20:20.766154 | orchestrator | Wednesday 28 May 2025 17:15:05 +0000 (0:00:00.872) 0:00:57.983 ********* 2025-05-28 17:20:20.766165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-28 17:20:20.766177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-28 17:20:20.766188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 17:20:20.766199 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.766238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-28 17:20:20.766258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-28 17:20:20.766270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 17:20:20.766281 | orchestrator | 
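All of the service-cert-copy tasks in this run (extra CA certificates, backend internal TLS certificate, and backend internal TLS key, for both mariadb and proxysql) skip every item on every node because backend TLS is not enabled in this testbed. A minimal sketch of the switch that would activate them, assuming kolla-ansible's documented globals.yml option (the value shown is illustrative, not taken from this job):

    # etc/kolla/globals.yml -- hypothetical override, not set in this run
    kolla_enable_tls_backend: "yes"   # default is "no"; when enabled, the
                                      # service-cert-copy tasks above copy the
                                      # backend certificate and key into each
                                      # service's config directory instead of skipping
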
skipping: [testbed-node-1]
2025-05-28 17:20:20.766292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-28 17:20:20.766303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-28 17:20:20.766314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-28 17:20:20.766325 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:20:20.766336 | orchestrator |
2025-05-28 17:20:20.766347 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2025-05-28 17:20:20.766358 | orchestrator | Wednesday 28 May 2025 17:15:06 +0000 (0:00:01.401) 0:00:59.384 *********
2025-05-28 17:20:20.766386 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-05-28 17:20:20.766398 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-05-28 17:20:20.766415 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-05-28 17:20:20.766426 | orchestrator |
2025-05-28 17:20:20.766437 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2025-05-28 17:20:20.766467 | orchestrator | Wednesday 28 May 2025 17:15:08 +0000 (0:00:01.259) 0:01:00.643 *********
2025-05-28 17:20:20.766478 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-05-28 17:20:20.766489 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-05-28 17:20:20.766500 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-05-28 17:20:20.766510 | orchestrator |
2025-05-28 17:20:20.766526 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2025-05-28 17:20:20.766537 | orchestrator | Wednesday 28 May 2025 17:15:09 +0000 (0:00:01.287) 0:01:01.931 *********
2025-05-28 17:20:20.766548 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-05-28 17:20:20.766558 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-05-28 17:20:20.766569 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-28 17:20:20.766580 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:20:20.766591 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-05-28 17:20:20.766601 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-28 17:20:20.766612 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:20:20.766623 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-28 17:20:20.766634 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:20:20.766645 | orchestrator |
2025-05-28 17:20:20.766655 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2025-05-28 17:20:20.766666 | orchestrator | Wednesday 28 May 2025 17:15:10 +0000 (0:00:00.978) 0:01:02.910 *********
2025-05-28 17:20:20.766677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-28 17:20:20.766689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-28 17:20:20.766700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-28 17:20:20.766727 | orchestrator | changed:
[testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-28 17:20:20.766744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-28 17:20:20.766755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-28 17:20:20.766767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-28 17:20:20.766778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-28 17:20:20.766790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
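The "Check loadbalancer containers" loop above prints kolla-ansible's per-service container definitions in full. Reassembled as YAML, the haproxy entry for testbed-node-0 reads as follows (values copied from the logged item; the enclosing loadbalancer_services variable name follows the kolla-ansible role-defaults convention and is an assumption here):

    loadbalancer_services:        # variable name assumed from kolla-ansible conventions
      haproxy:
        container_name: haproxy
        group: loadbalancer
        enabled: true
        image: registry.osism.tech/kolla/haproxy:2024.2
        privileged: true
        volumes:
          - /etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro
          - /etc/localtime:/etc/localtime:ro
          - /etc/timezone:/etc/timezone:ro
          - haproxy_socket:/var/lib/kolla/haproxy/
          - letsencrypt_certificates:/etc/haproxy/certificates
        dimensions: {}
        healthcheck:
          interval: "30"
          retries: "3"
          start_period: "5"
          test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:61313"]
          timeout: "30"

The healthcheck block maps onto Docker's HEALTHCHECK options; healthcheck_curl and healthcheck_listen are helper scripts shipped in the kolla images, here probing HAProxy's monitor port (61313) and ProxySQL's admin port (6032) respectively.
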
'dimensions': {}}}) 2025-05-28 17:20:20.766801 | orchestrator | 2025-05-28 17:20:20.766811 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-05-28 17:20:20.766822 | orchestrator | Wednesday 28 May 2025 17:15:13 +0000 (0:00:02.654) 0:01:05.564 ********* 2025-05-28 17:20:20.766833 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:20:20.766844 | orchestrator | 2025-05-28 17:20:20.766855 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-05-28 17:20:20.766872 | orchestrator | Wednesday 28 May 2025 17:15:14 +0000 (0:00:00.927) 0:01:06.492 ********* 2025-05-28 17:20:20.766890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-28 17:20:20.766902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-28 17:20:20.766920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.766931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.766943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 
'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-28 17:20:20.766954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-28 17:20:20.766972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.766989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.767006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-28 17:20:20.767018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-28 17:20:20.767029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.767040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.767057 | orchestrator | 2025-05-28 17:20:20.767068 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-05-28 17:20:20.767079 | orchestrator | Wednesday 28 May 2025 17:15:19 +0000 (0:00:05.079) 0:01:11.572 ********* 2025-05-28 17:20:20.767090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-28 17:20:20.767108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-28 
17:20:20.767124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.767136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.767146 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.767158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-28 17:20:20.767169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-28 17:20:20.767187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.767198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 
'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.767209 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.767226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-28 17:20:20.767237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-28 17:20:20.767249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.767260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.767278 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.767289 | orchestrator | 2025-05-28 17:20:20.767300 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-05-28 17:20:20.767310 | orchestrator | Wednesday 28 May 2025 17:15:20 +0000 (0:00:01.251) 
0:01:12.823 *********
2025-05-28 17:20:20.767322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-05-28 17:20:20.767334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-05-28 17:20:20.767345 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:20:20.767356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-05-28 17:20:20.767396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-05-28 17:20:20.767407 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:20:20.767418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-05-28 17:20:20.767429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-05-28 17:20:20.767440 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:20:20.767451 | orchestrator |
2025-05-28 17:20:20.767467 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2025-05-28 17:20:20.767478 | orchestrator | Wednesday 28 May 2025 17:15:21 +0000 (0:00:01.592) 0:01:14.416 *********
2025-05-28 17:20:20.767489 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:20:20.767500 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:20:20.767511 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:20:20.767522 | orchestrator |
2025-05-28 17:20:20.767532 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2025-05-28 17:20:20.767543 | orchestrator | Wednesday 28 May 2025 17:15:23 +0000 (0:00:01.465) 0:01:15.881 *********
2025-05-28 17:20:20.767554 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:20:20.767564 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:20:20.767609 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:20:20.767622 | orchestrator |
2025-05-28 17:20:20.767650 | orchestrator | TASK [include_role : barbican] *************************************************
2025-05-28 17:20:20.767662 | orchestrator | Wednesday 28 May 2025 17:15:26 +0000 (0:00:02.876) 0:01:18.757 *********
2025-05-28 17:20:20.767672 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 17:20:20.767683 | orchestrator |
2025-05-28 17:20:20.767694 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2025-05-28 17:20:20.767704 | orchestrator | Wednesday 28 May 2025 17:15:27 +0000 (0:00:00.807) 0:01:19.565 *********
2025-05-28 17:20:20.767716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes':
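The haproxy-config loops for aodh above (config copy, single external frontend, firewall) all consume the haproxy sub-dict of the service definition: one internal and one external frontend per API. As YAML, with the values exactly as logged for aodh:

    aodh_api:
      enabled: "yes"
      mode: http
      external: false
      port: "8042"
      listen_port: "8042"
    aodh_api_external:
      enabled: "yes"
      mode: http
      external: true
      external_fqdn: api.testbed.osism.xyz
      port: "8042"
      listen_port: "8042"

The two proxysql-config tasks render the per-service database user and query-routing snippets for ProxySQL; their contents are not echoed in the log, only the changed status per node.
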
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-28 17:20:20.767735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.767748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.767759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-28 17:20:20.767777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.767793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.767811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-28 17:20:20.767823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.767834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.767845 | orchestrator | 2025-05-28 17:20:20.767856 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-05-28 17:20:20.767867 | orchestrator | Wednesday 28 May 2025 17:15:33 +0000 (0:00:06.359) 0:01:25.925 ********* 2025-05-28 17:20:20.767886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-28 17:20:20.767907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-28 17:20:20.767925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.767937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.767948 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.767959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.767970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.767981 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.768003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-28 17:20:20.768015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.768034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.768045 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.768056 | orchestrator | 2025-05-28 17:20:20.768067 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-05-28 17:20:20.768078 | orchestrator | Wednesday 28 May 2025 17:15:34 +0000 (0:00:00.665) 0:01:26.590 ********* 2025-05-28 17:20:20.768089 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-28 17:20:20.768102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-28 17:20:20.768113 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.768124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-28 17:20:20.768134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-28 17:20:20.768145 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.768156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-28 17:20:20.768167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-28 17:20:20.768178 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.768189 | orchestrator | 2025-05-28 17:20:20.768200 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-05-28 17:20:20.768210 | orchestrator | Wednesday 28 May 2025 17:15:34 +0000 (0:00:00.830) 0:01:27.421 ********* 2025-05-28 17:20:20.768221 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.768232 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:20:20.768243 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:20:20.768253 | orchestrator | 2025-05-28 17:20:20.768264 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-05-28 17:20:20.768275 | orchestrator | Wednesday 28 May 2025 17:15:37 +0000 (0:00:02.436) 0:01:29.858 ********* 2025-05-28 17:20:20.768285 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.768296 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:20:20.768307 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:20:20.768325 | orchestrator | 2025-05-28 17:20:20.768342 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-05-28 17:20:20.768353 | orchestrator | Wednesday 28 May 2025 17:15:39 +0000 (0:00:01.882) 0:01:31.741 ********* 2025-05-28 17:20:20.768413 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.768426 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.768436 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.768447 | orchestrator | 2025-05-28 17:20:20.768458 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-05-28 17:20:20.768468 | orchestrator | Wednesday 28 May 2025 17:15:39 +0000 (0:00:00.250) 0:01:31.991 ********* 2025-05-28 17:20:20.768479 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, 
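testbed-node-2

Every service in this play gets the same haproxy-config treatment, and each haproxy entry comes as an internal/external pair, as in the barbican items above. A minimal YAML sketch of one pair, values verbatim from the log; the per-key comments are my reading of how the haproxy-config role uses them, not authoritative kolla-ansible documentation:

barbican_api:
  enabled: 'yes'       # render this frontend/backend pair at all
  mode: http           # haproxy proxy mode (http here; tcp for some services)
  external: false      # bind on the internal VIP
  port: '9311'         # backend port on each controller
  listen_port: '9311'  # frontend port on the VIP
  tls_backend: 'no'    # plain HTTP from haproxy to the backend
barbican_api_external:
  enabled: 'yes'
  mode: http
  external: true                        # bind on the external VIP instead
  external_fqdn: api.testbed.osism.xyz  # name the external endpoint answers for
  port: '9311'
  listen_port: '9311'
  tls_backend: 'no'

The "Configuring firewall" tasks consume the same pairs but skip on every node in this run, so no firewall rules are managed here; the proxysql-config tasks are the ones reporting changed, evidently writing per-service ProxySQL users and rules configuration on each controller.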
2025-05-28 17:20:20.768490 | orchestrator | 2025-05-28 17:20:20.768506 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-05-28 17:20:20.768517 | orchestrator | Wednesday 28 May 2025 17:15:40 +0000 (0:00:00.607) 0:01:32.599 ********* 2025-05-28 17:20:20.768529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-28 17:20:20.768541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-28 17:20:20.768553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-28 17:20:20.768565 | orchestrator | 2025-05-28 17:20:20.768576 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-05-28 17:20:20.768586 | orchestrator | Wednesday 28 May 2025 17:15:44 +0000 (0:00:04.132) 0:01:36.731 ********* 2025-05-28 17:20:20.768604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server
testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-28 17:20:20.768638 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.768655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-28 17:20:20.768666 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.768677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-28 17:20:20.768688 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.768699 | orchestrator | 2025-05-28 17:20:20.768709 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-05-28 17:20:20.768720 | orchestrator | Wednesday 28 May 2025 17:15:45 +0000 (0:00:01.602) 0:01:38.334 ********* 2025-05-28 17:20:20.768732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-28 17:20:20.768746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 
192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-28 17:20:20.768757 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.768767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-28 17:20:20.768783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-28 17:20:20.768793 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.768808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-28 17:20:20.768819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-28 17:20:20.768828 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.768838 | orchestrator | 2025-05-28 17:20:20.768852 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-05-28 17:20:20.768862 | orchestrator | Wednesday 28 May 2025 17:15:47 +0000 (0:00:01.783) 0:01:40.117 ********* 2025-05-28 17:20:20.768872 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.768881 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.768891 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.768900 | orchestrator | 2025-05-28 17:20:20.768910 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-05-28 17:20:20.768919 | orchestrator | Wednesday 28 May 2025 17:15:48 +0000 (0:00:00.656) 0:01:40.773 ********* 2025-05-28 17:20:20.768929 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.768938 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.768948 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.768957 | orchestrator | 2025-05-28 17:20:20.768966 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-05-28 17:20:20.768976 | orchestrator | Wednesday 28 May 2025 17:15:49 +0000 (0:00:01.119) 0:01:41.893 ********* 2025-05-28 17:20:20.768986 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, 
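testbed-node-2

Note the shape change for ceph-rgw above: there is no kolla container for it on the controllers ('group': 'all', no container_name or image), and instead of deriving backend members from an inventory group the entry ships an explicit custom_member_list. Sketch with the values from the log:

radosgw:
  enabled: true
  mode: http
  external: false
  port: '6780'   # haproxy frontend port for radosgw
  custom_member_list:
    # literal haproxy server lines: name, address:port, then health
    # checking every 2000 ms, 2 successes to rise, 5 failures to fall
    - server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5
    - server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5
    - server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5

So haproxy listens on 6780 and forwards to the radosgw instances on the Ceph nodes (testbed-node-3 through -5) on port 8081. The ceph-rgw ProxySQL tasks above skip on all nodes, consistent with radosgw keeping its state in RADOS rather than MariaDB.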
2025-05-28 17:20:20.768995 | orchestrator | 2025-05-28 17:20:20.769005 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-05-28 17:20:20.769014 | orchestrator | Wednesday 28 May 2025 17:15:50 +0000 (0:00:00.795) 0:01:42.688 ********* 2025-05-28 17:20:20.769024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 17:20:20.769043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.769053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.769069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.769084 | orchestrator | changed: [testbed-node-0] => (item={'key':
'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 17:20:20.769095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.769105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.769121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.769136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 17:20:20.769152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.769162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.769172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.769188 | orchestrator | 2025-05-28 17:20:20.769198 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-05-28 17:20:20.769207 | orchestrator | Wednesday 28 May 2025 17:15:53 +0000 (0:00:03.086) 0:01:45.775 ********* 2025-05-28 17:20:20.769217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 
'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 17:20:20.769227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.769250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.769260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.769270 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.769280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}}}})  2025-05-28 17:20:20.769295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.769305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.769321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.769331 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.769345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 17:20:20.769355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.769389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.769400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.769410 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.769420 | orchestrator | 2025-05-28 17:20:20.769429 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-05-28 17:20:20.769439 | orchestrator | Wednesday 28 May 2025 17:15:54 +0000 (0:00:00.867) 0:01:46.643 ********* 2025-05-28 17:20:20.769450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-28 17:20:20.769464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-28 17:20:20.769475 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.769485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-28 17:20:20.769499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-28 17:20:20.769509 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.769519 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-28 17:20:20.769529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-28 17:20:20.769539 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.769548 | orchestrator | 2025-05-28 17:20:20.769558 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-05-28 17:20:20.769574 | orchestrator | Wednesday 28 May 2025 17:15:55 +0000 (0:00:00.880) 0:01:47.524 ********* 2025-05-28 17:20:20.769584 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.769593 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:20:20.769603 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:20:20.769612 | orchestrator | 2025-05-28 17:20:20.769622 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-05-28 17:20:20.769632 | orchestrator | Wednesday 28 May 2025 17:15:56 +0000 (0:00:01.266) 0:01:48.791 ********* 2025-05-28 17:20:20.769641 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.769650 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:20:20.769660 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:20:20.769669 | orchestrator | 2025-05-28 17:20:20.769679 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-05-28 17:20:20.769688 | orchestrator | Wednesday 28 May 2025 17:15:58 +0000 (0:00:02.063) 0:01:50.854 ********* 2025-05-28 17:20:20.769698 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.769707 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.769716 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.769726 | orchestrator | 2025-05-28 17:20:20.769735 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-05-28 17:20:20.769745 | orchestrator | Wednesday 28 May 2025 17:15:58 +0000 (0:00:00.311) 0:01:51.436 ********* 2025-05-28 17:20:20.769755 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.769764 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.769773 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.769783 | orchestrator | 2025-05-28 17:20:20.769793 | orchestrator | TASK [include_role : designate] ************************************************ 2025-05-28 17:20:20.769802 | orchestrator | Wednesday 28 May 2025 17:15:59 +0000 (0:00:00.311) 0:01:51.747 ********* 2025-05-28 17:20:20.769812 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:20:20.769821 | orchestrator |
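Before the designate items start, one more recurring structure worth pinning down: every container item in this play carries the same healthcheck block. Sketch with the cinder-api values from earlier in the play (the per-field comments are my interpretation of how kolla feeds these into Docker healthchecks, not verified against the images):

healthcheck:
  interval: '30'       # seconds between probes
  retries: '3'         # consecutive failures before the container is unhealthy
  start_period: '5'    # grace period after container start
  test: ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776']
  timeout: '30'        # per-probe timeout

API containers probe their own HTTP endpoint with healthcheck_curl; agent-style containers (cinder-scheduler/volume/backup above, the designate workers below) use healthcheck_port <name> 5672 instead, which, as far as one can tell from the name, checks that the named process still holds a connection to RabbitMQ on port 5672.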
2025-05-28 17:20:20.769831 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-05-28 17:20:20.769840 | orchestrator | Wednesday 28 May 2025 17:16:00 +0000 (0:00:00.855) 0:01:52.603 ********* 2025-05-28 17:20:20.769850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-28 17:20:20.769864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-28 17:20:20.769879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.769896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.769907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.769917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.769927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.769937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-28 17:20:20.769952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-28 17:20:20.769972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.769983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.769993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.770003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.770013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.770056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-28 17:20:20.770078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-28 17:20:20.770089 | 
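Every designate item above follows kolla-ansible's common service-definition shape, and the haproxy-config role only renders load-balancer sections for entries that carry a haproxy sub-dict — which is why only designate-api reports changed while central, mdns, producer, worker and the disabled sink are skipped. A minimal YAML sketch of that shape, restating the fields from the designate-api item (values copied from the log items; the layout is illustrative):

    designate-api:
      container_name: designate_api
      group: designate-api            # inventory group the container runs on
      enabled: true                   # designate-sink carries enabled: false and is skipped outright
      image: registry.osism.tech/kolla/designate-api:2024.2
      healthcheck:
        test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9001"]
        interval: "30"
        retries: "3"
        timeout: "30"
      haproxy:
        designate_api:                # internal VIP frontend
          enabled: "yes"
          mode: http
          external: false
          port: "9001"
          listen_port: "9001"
        designate_api_external:       # public frontend behind the external FQDN
          enabled: "yes"
          mode: http
          external: true
          external_fqdn: api.testbed.osism.xyz
          port: "9001"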
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.770099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.770109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.770119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.770129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.770138 | orchestrator | 2025-05-28 17:20:20.770148 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-05-28 17:20:20.770164 | orchestrator | Wednesday 28 May 2025 17:16:04 +0000 (0:00:04.781) 0:01:57.384 ********* 2025-05-28 17:20:20.770192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-28 17:20:20.770203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-28 17:20:20.770213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.770223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.770233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.770243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-28 17:20:20.770267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.770286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.770296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-28 17:20:20.770306 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.770316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.770326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.770336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.770357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-28 17:20:20.770388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.770398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-28 17:20:20.770408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.770418 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.770428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.770438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.770470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.770486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.770500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-28 
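The "single external frontend" variant of each task is skipped on all three nodes, which indicates the consolidated external frontend is disabled in this deployment. The controlling switch is presumably kolla-ansible's haproxy_single_external_frontend variable — an assumption, since the toggle itself never appears in this log:

    # /etc/kolla/globals.yml -- hypothetical excerpt; the default (false)
    # is consistent with the skipped tasks in this run
    haproxy_single_external_frontend: false
    # when enabled, the per-service *_external frontends would collapse into
    # a single external frontend that routes requests by FQDN/path instead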
17:20:20.770511 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.770521 | orchestrator | 2025-05-28 17:20:20.770531 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-05-28 17:20:20.770540 | orchestrator | Wednesday 28 May 2025 17:16:05 +0000 (0:00:00.801) 0:01:58.186 ********* 2025-05-28 17:20:20.770551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-28 17:20:20.770560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-28 17:20:20.770571 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.770581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-28 17:20:20.770591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-28 17:20:20.770600 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.770610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-28 17:20:20.770619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-28 17:20:20.770629 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.770638 | orchestrator | 2025-05-28 17:20:20.770648 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-05-28 17:20:20.770664 | orchestrator | Wednesday 28 May 2025 17:16:06 +0000 (0:00:00.946) 0:01:59.132 ********* 2025-05-28 17:20:20.770673 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.770683 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:20:20.770693 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:20:20.770702 | orchestrator | 2025-05-28 17:20:20.770712 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-05-28 17:20:20.770721 | orchestrator | Wednesday 28 May 2025 17:16:08 +0000 (0:00:01.680) 0:02:00.812 ********* 2025-05-28 17:20:20.770731 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:20:20.770740 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.770749 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:20:20.770759 | orchestrator | 2025-05-28 17:20:20.770769 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-05-28 17:20:20.770778 | orchestrator | Wednesday 28 May 2025 17:16:10 +0000 (0:00:01.919) 0:02:02.732 ********* 2025-05-28 17:20:20.770788 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.770797 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.770807 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.770816 | orchestrator | 2025-05-28 17:20:20.770826 | orchestrator | TASK 
[include_role : glance] *************************************************** 2025-05-28 17:20:20.770835 | orchestrator | Wednesday 28 May 2025 17:16:10 +0000 (0:00:00.283) 0:02:03.015 ********* 2025-05-28 17:20:20.770845 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:20:20.770854 | orchestrator | 2025-05-28 17:20:20.770864 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-05-28 17:20:20.770873 | orchestrator | Wednesday 28 May 2025 17:16:11 +0000 (0:00:00.793) 0:02:03.808 ********* 2025-05-28 17:20:20.770897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-28 17:20:20.770910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-28 17:20:20.770935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-28 17:20:20.770947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-28 17:20:20.770985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-28 17:20:20.771002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
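Two things distinguish the glance_api entries from the designate ones above: a custom_member_list, which the role copies verbatim into the backend instead of generating one server line per inventory host, and 6-hour client/server timeout overrides, which keep long-running image uploads from being cut off by the default HAProxy timeouts. Restated from the log items as YAML:

    haproxy:
      glance_api:
        enabled: true
        mode: http
        external: false
        port: "9292"
        frontend_http_extra:
          - timeout client 6h          # appended as-is to the frontend section
        backend_http_extra:
          - timeout server 6h          # appended as-is to the backend section
        custom_member_list:            # emitted verbatim, one server line each
          - server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5
          - server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5
          - server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5

The glance-tls-proxy sibling entry stays skipped because it is defined with enabled: 'no'; its member list differs only by the ssl verify required ca-file options that backend TLS would need.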
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-28 17:20:20.771019 | orchestrator | 2025-05-28 17:20:20.771029 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-05-28 17:20:20.771039 | orchestrator | Wednesday 28 May 2025 17:16:15 +0000 (0:00:04.071) 0:02:07.880 ********* 2025-05-28 17:20:20.771055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-28 17:20:20.771071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-28 17:20:20.771087 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.771098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-28 17:20:20.771120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-28 17:20:20.771131 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.771163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-28 17:20:20.771186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-28 17:20:20.771197 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.771206 | orchestrator | 2025-05-28 17:20:20.771216 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-05-28 17:20:20.771226 | orchestrator | Wednesday 28 May 2025 17:16:18 +0000 (0:00:02.810) 0:02:10.690 ********* 2025-05-28 17:20:20.771236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-28 17:20:20.771252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-28 17:20:20.771262 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.771272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 
6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-28 17:20:20.771283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-28 17:20:20.771293 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.771303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-28 17:20:20.771318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-28 17:20:20.771329 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.771338 | orchestrator | 2025-05-28 17:20:20.771348 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-05-28 17:20:20.771358 | orchestrator | Wednesday 28 May 2025 17:16:21 +0000 (0:00:03.351) 0:02:14.042 ********* 2025-05-28 17:20:20.771415 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.771431 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:20:20.771440 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:20:20.771450 | orchestrator | 2025-05-28 17:20:20.771459 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-05-28 17:20:20.771469 | orchestrator | Wednesday 28 May 2025 17:16:23 +0000 (0:00:01.559) 0:02:15.601 ********* 2025-05-28 17:20:20.771485 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.771494 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:20:20.771504 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:20:20.771513 | orchestrator | 2025-05-28 17:20:20.771523 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-05-28 17:20:20.771533 | orchestrator | Wednesday 28 May 2025 17:16:25 +0000 (0:00:01.869) 0:02:17.471 ********* 2025-05-28 17:20:20.771542 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.771551 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.771561 | orchestrator | 
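The proxysql-config tasks reporting changed here write per-service user and query-rule fragments so that each service's database traffic goes through ProxySQL rather than straight to MariaDB. The exact file layout is internal to the role and not visible in this log, so the following is only a hypothetical sketch using ProxySQL's own mysql_users / mysql_query_rules vocabulary:

    # hypothetical per-service fragment; field names follow ProxySQL's
    # mysql_users and mysql_query_rules tables, values are placeholders
    mysql_users:
      - username: glance                            # service DB account
        password: "{{ glance_database_password }}"  # assumed variable name
        default_hostgroup: 0                        # writer hostgroup
    mysql_query_rules:
      - rule_id: 1
        match_digest: "."                           # placeholder; real rules are role-generated
        destination_hostgroup: 0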
skipping: [testbed-node-2] 2025-05-28 17:20:20.771570 | orchestrator | 2025-05-28 17:20:20.771580 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-05-28 17:20:20.771590 | orchestrator | Wednesday 28 May 2025 17:16:25 +0000 (0:00:00.303) 0:02:17.774 ********* 2025-05-28 17:20:20.771599 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:20:20.771609 | orchestrator | 2025-05-28 17:20:20.771618 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-05-28 17:20:20.771628 | orchestrator | Wednesday 28 May 2025 17:16:26 +0000 (0:00:00.820) 0:02:18.595 ********* 2025-05-28 17:20:20.771638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-28 17:20:20.771649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-28 17:20:20.771659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-28 17:20:20.771669 | orchestrator | 2025-05-28 17:20:20.771679 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-05-28 17:20:20.771688 | orchestrator | Wednesday 28 May 2025 17:16:29 +0000 (0:00:03.014) 0:02:21.609 ********* 2025-05-28 17:20:20.771705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-28 17:20:20.771724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-28 17:20:20.771735 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.771744 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.771754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-28 17:20:20.771764 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.771773 | orchestrator | 2025-05-28 17:20:20.771783 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-05-28 17:20:20.771792 | orchestrator | Wednesday 28 May 2025 17:16:29 +0000 (0:00:00.368) 0:02:21.977 ********* 2025-05-28 17:20:20.771800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-28 17:20:20.771808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-28 17:20:20.771816 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.771824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-28 17:20:20.771832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-28 17:20:20.771840 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.771848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-28 17:20:20.771856 | orchestrator | skipping: [testbed-node-2] => 
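As with designate and glance, every per-service "Configuring firewall" task is skipped on all nodes, so kolla-ansible is not opening HAProxy's listen ports through firewalld in this run. The controlling switch is presumably enable_external_api_firewalld — an assumption, as the variable is not shown in the log:

    # /etc/kolla/globals.yml -- assumed excerpt; "no" is consistent with the skips
    enable_external_api_firewalld: "no"
    # with "yes", the role would open each service's listen_port in the
    # external firewalld zone, e.g. 3000 for grafana_server and 9001 for
    # designate_api as listed in the skipped items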
(item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-28 17:20:20.771864 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.771871 | orchestrator | 2025-05-28 17:20:20.771879 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-05-28 17:20:20.771887 | orchestrator | Wednesday 28 May 2025 17:16:30 +0000 (0:00:00.635) 0:02:22.613 ********* 2025-05-28 17:20:20.771894 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.771902 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:20:20.771910 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:20:20.771922 | orchestrator | 2025-05-28 17:20:20.771930 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-05-28 17:20:20.771938 | orchestrator | Wednesday 28 May 2025 17:16:31 +0000 (0:00:01.484) 0:02:24.097 ********* 2025-05-28 17:20:20.771946 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.771953 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:20:20.771961 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:20:20.771969 | orchestrator | 2025-05-28 17:20:20.771977 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-05-28 17:20:20.771984 | orchestrator | Wednesday 28 May 2025 17:16:33 +0000 (0:00:01.930) 0:02:26.028 ********* 2025-05-28 17:20:20.771992 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.772000 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.772012 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.772020 | orchestrator | 2025-05-28 17:20:20.772028 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-05-28 17:20:20.772035 | orchestrator | Wednesday 28 May 2025 17:16:33 +0000 (0:00:00.302) 0:02:26.331 ********* 2025-05-28 17:20:20.772043 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:20:20.772051 | orchestrator | 2025-05-28 17:20:20.772059 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-05-28 17:20:20.772067 | orchestrator | Wednesday 28 May 2025 17:16:34 +0000 (0:00:00.873) 0:02:27.205 ********* 2025-05-28 17:20:20.772079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-28 17:20:20.772098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-28 17:20:20.772112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 
'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-28 17:20:20.772121 | orchestrator | 2025-05-28 17:20:20.772129 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-05-28 17:20:20.772155 | orchestrator | Wednesday 28 May 2025 17:16:39 +0000 (0:00:04.365) 0:02:31.570 ********* 2025-05-28 17:20:20.772176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-28 17:20:20.772185 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.772194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-28 17:20:20.772207 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.772236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 
'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-28 17:20:20.772247 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.772255 | orchestrator | 2025-05-28 17:20:20.772262 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-05-28 17:20:20.772270 | orchestrator | Wednesday 28 May 2025 17:16:39 +0000 (0:00:00.846) 0:02:32.417 ********* 2025-05-28 17:20:20.772278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-28 17:20:20.772288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-28 17:20:20.772297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-28 17:20:20.772314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  
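The per-service dicts logged above are the haproxy metadata that the haproxy-config role loops over for each node. As a minimal illustrative sketch (plain Python walking a copied dict, not the role's actual Jinja templating), the grafana entry from the log can be inspected like this to see which frontends the service declares; note that the log mixes the string 'yes' and the boolean True for 'enabled', which the helper below normalizes:

    # Sketch only: the dict literal is copied from the grafana item logged above.
    # The real kolla-ansible role renders Jinja templates; this just walks the data.
    grafana_haproxy = {
        "grafana_server": {"enabled": "yes", "mode": "http", "external": False,
                           "port": "3000", "listen_port": "3000"},
        "grafana_server_external": {"enabled": True, "mode": "http", "external": True,
                                    "external_fqdn": "api.testbed.osism.xyz",
                                    "port": "3000", "listen_port": "3000"},
    }

    def is_enabled(value):
        # kolla service dicts carry both booleans and 'yes'/'no' strings
        return value in (True, "yes")

    for name, cfg in grafana_haproxy.items():
        if not is_enabled(cfg["enabled"]):
            continue
        scope = "external" if cfg["external"] else "internal"
        fqdn = cfg.get("external_fqdn", "-")
        print(f"{name}: {scope} {cfg['mode']} frontend, "
              f"listen {cfg['listen_port']} -> backend port {cfg['port']} (fqdn {fqdn})")

Run as-is, this prints one line per declared frontend, mirroring why each service contributes an internal VIP entry and an external entry behind api.testbed.osism.xyz.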
2025-05-28 17:20:20.772322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-28 17:20:20.772331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-28 17:20:20.772339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-28 17:20:20.772347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-28 17:20:20.772550 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.772567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-28 17:20:20.772575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-28 17:20:20.772583 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.772597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-28 17:20:20.772606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-28 17:20:20.772614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-28 17:20:20.772622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}})  2025-05-28 17:20:20.772630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-28 17:20:20.772638 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.772653 | orchestrator | 2025-05-28 17:20:20.772661 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-05-28 17:20:20.772669 | orchestrator | Wednesday 28 May 2025 17:16:41 +0000 (0:00:01.281) 0:02:33.698 ********* 2025-05-28 17:20:20.772677 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.772684 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:20:20.772692 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:20:20.772700 | orchestrator | 2025-05-28 17:20:20.772707 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-05-28 17:20:20.772715 | orchestrator | Wednesday 28 May 2025 17:16:43 +0000 (0:00:01.944) 0:02:35.643 ********* 2025-05-28 17:20:20.772723 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.772730 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:20:20.772738 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:20:20.772746 | orchestrator | 2025-05-28 17:20:20.772753 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-05-28 17:20:20.772761 | orchestrator | Wednesday 28 May 2025 17:16:45 +0000 (0:00:02.096) 0:02:37.739 ********* 2025-05-28 17:20:20.772769 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.772776 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.772784 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.772791 | orchestrator | 2025-05-28 17:20:20.772799 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-05-28 17:20:20.772807 | orchestrator | Wednesday 28 May 2025 17:16:45 +0000 (0:00:00.337) 0:02:38.076 ********* 2025-05-28 17:20:20.772815 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.772822 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.772830 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.772838 | orchestrator | 2025-05-28 17:20:20.772845 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-05-28 17:20:20.772853 | orchestrator | Wednesday 28 May 2025 17:16:46 +0000 (0:00:00.399) 0:02:38.475 ********* 2025-05-28 17:20:20.772860 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:20:20.772868 | orchestrator | 2025-05-28 17:20:20.772876 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-05-28 17:20:20.772884 | orchestrator | Wednesday 28 May 2025 17:16:47 +0000 (0:00:01.254) 0:02:39.730 ********* 2025-05-28 17:20:20.772898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-28 17:20:20.772926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-28 17:20:20.772942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-28 17:20:20.772951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-28 17:20:20.772960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-28 17:20:20.772968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-28 17:20:20.772985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-28 17:20:20.772994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-28 17:20:20.773018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-28 17:20:20.773027 | orchestrator | 2025-05-28 17:20:20.773035 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-05-28 17:20:20.773043 | orchestrator | Wednesday 28 May 2025 17:16:52 +0000 (0:00:04.771) 0:02:44.502 ********* 2025-05-28 17:20:20.773052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-28 17:20:20.773060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-28 17:20:20.773072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-28 17:20:20.773081 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.773093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-28 17:20:20.773117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-28 17:20:20.773125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-28 17:20:20.773135 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.773144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-28 17:20:20.773158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-28 17:20:20.773185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-28 17:20:20.773199 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.773208 | orchestrator | 2025-05-28 17:20:20.773218 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-05-28 17:20:20.773227 | 
orchestrator | Wednesday 28 May 2025 17:16:52 +0000 (0:00:00.634) 0:02:45.136 ********* 2025-05-28 17:20:20.773236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-28 17:20:20.773247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-28 17:20:20.773256 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.773264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-28 17:20:20.773274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-28 17:20:20.773283 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.773292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-28 17:20:20.773301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-28 17:20:20.773310 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.773320 | orchestrator | 2025-05-28 17:20:20.773329 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-05-28 17:20:20.773338 | orchestrator | Wednesday 28 May 2025 17:16:53 +0000 (0:00:01.131) 0:02:46.268 ********* 2025-05-28 17:20:20.773346 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.773355 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:20:20.773379 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:20:20.773388 | orchestrator | 2025-05-28 17:20:20.773397 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-05-28 17:20:20.773406 | orchestrator | Wednesday 28 May 2025 17:16:55 +0000 (0:00:01.242) 0:02:47.511 ********* 2025-05-28 17:20:20.773415 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.773424 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:20:20.773432 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:20:20.773441 | orchestrator | 2025-05-28 17:20:20.773449 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-05-28 17:20:20.773458 | orchestrator | Wednesday 28 May 2025 17:16:56 +0000 (0:00:01.902) 0:02:49.414 ********* 2025-05-28 17:20:20.773467 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.773476 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.773485 | 
orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.773492 | orchestrator | 2025-05-28 17:20:20.773500 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-05-28 17:20:20.773513 | orchestrator | Wednesday 28 May 2025 17:16:57 +0000 (0:00:00.301) 0:02:49.715 ********* 2025-05-28 17:20:20.773521 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:20:20.773528 | orchestrator | 2025-05-28 17:20:20.773536 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-05-28 17:20:20.773544 | orchestrator | Wednesday 28 May 2025 17:16:58 +0000 (0:00:01.191) 0:02:50.907 ********* 2025-05-28 17:20:20.773561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 17:20:20.773571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.773580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 17:20:20.773588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.773600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 17:20:20.773631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.773640 | orchestrator | 2025-05-28 17:20:20.773648 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-05-28 17:20:20.773656 | orchestrator | Wednesday 28 May 2025 17:17:01 +0000 (0:00:03.119) 0:02:54.026 ********* 2025-05-28 17:20:20.773664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-28 17:20:20.773673 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.773681 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.773690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-28 17:20:20.773710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.773718 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.773730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-28 17:20:20.773739 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.773747 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.773755 | orchestrator | 2025-05-28 17:20:20.773763 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-05-28 17:20:20.773770 | orchestrator | Wednesday 28 May 2025 17:17:02 +0000 (0:00:00.648) 0:02:54.675 ********* 2025-05-28 17:20:20.773779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-28 17:20:20.773787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-28 17:20:20.773795 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.773803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-28 17:20:20.773811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-28 17:20:20.773834 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.773842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-28 17:20:20.773850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-28 17:20:20.773858 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.773866 | orchestrator | 2025-05-28 17:20:20.773874 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-05-28 17:20:20.773882 | orchestrator | Wednesday 28 May 2025 17:17:03 +0000 (0:00:01.389) 0:02:56.064 ********* 2025-05-28 17:20:20.773889 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.773897 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:20:20.773905 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:20:20.773912 | orchestrator | 2025-05-28 17:20:20.773920 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-05-28 17:20:20.773928 | orchestrator | Wednesday 28 May 2025 17:17:04 +0000 (0:00:01.218) 0:02:57.283 ********* 2025-05-28 17:20:20.773936 | orchestrator | changed: [testbed-node-0] 
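The magnum entries follow the same internal/external split seen for the other services (magnum_api on the internal VIP, magnum_api_external behind api.testbed.osism.xyz, both on port 9511). Purely as a hypothetical rendering sketch of what such an entry implies for the proxy layer — this is not kolla-ansible's actual template output, and the stanza layout and backend host names are assumptions for illustration — the logged dict can be turned into an HAProxy-style block:

    # Hypothetical sketch: dict copied from the magnum_api_external item logged
    # above; the stanza format below is illustrative, not kolla's template.
    magnum_api_external = {
        "enabled": "yes", "mode": "http", "external": True,
        "external_fqdn": "api.testbed.osism.xyz",
        "port": "9511", "listen_port": "9511",
    }

    def render_stanza(name, cfg, backends):
        # Build a frontend/backend pair from the service metadata.
        lines = [f"frontend {name}_front",
                 f"    mode {cfg['mode']}",
                 f"    bind *:{cfg['listen_port']}",
                 f"    default_backend {name}_back",
                 f"backend {name}_back",
                 f"    mode {cfg['mode']}"]
        lines += [f"    server {host} {host}:{cfg['port']}" for host in backends]
        return "\n".join(lines)

    print(render_stanza("magnum_api_external", magnum_api_external,
                        ["testbed-node-0", "testbed-node-1", "testbed-node-2"]))

The three backend servers mirror the three testbed nodes the play targets; in the real deployment the member list comes from the inventory, not a hard-coded list.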
2025-05-28 17:20:20.773943 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:20:20.773951 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:20:20.773959 | orchestrator | 2025-05-28 17:20:20.773966 | orchestrator | TASK [include_role : manila] *************************************************** 2025-05-28 17:20:20.773974 | orchestrator | Wednesday 28 May 2025 17:17:06 +0000 (0:00:01.929) 0:02:59.212 ********* 2025-05-28 17:20:20.773986 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:20:20.773993 | orchestrator | 2025-05-28 17:20:20.774002 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-05-28 17:20:20.774009 | orchestrator | Wednesday 28 May 2025 17:17:07 +0000 (0:00:00.992) 0:03:00.204 ********* 2025-05-28 17:20:20.774057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-28 17:20:20.774068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.774077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.774091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.774099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-28 17:20:20.774113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.774126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.774134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.774142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-28 17:20:20.774155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.774163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.774181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.774189 | orchestrator | 2025-05-28 17:20:20.774197 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-05-28 17:20:20.774206 | orchestrator | Wednesday 28 May 2025 17:17:11 +0000 (0:00:03.587) 0:03:03.792 ********* 2025-05-28 17:20:20.774236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-28 17:20:20.774245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.774261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.774269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.774277 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.774285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-28 17:20:20.774299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.774311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.774319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.774332 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.774340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-28 17:20:20.774349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.774357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.774414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.774423 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.774431 | orchestrator | 2025-05-28 17:20:20.774439 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-05-28 17:20:20.774447 | orchestrator | Wednesday 28 May 2025 17:17:12 +0000 (0:00:00.713) 0:03:04.506 ********* 2025-05-28 17:20:20.774455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-28 17:20:20.774468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-28 17:20:20.774476 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.774484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-28 17:20:20.774492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-28 17:20:20.774505 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.774513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-28 17:20:20.774521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-28 17:20:20.774529 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.774536 | orchestrator | 2025-05-28 17:20:20.774544 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-05-28 17:20:20.774552 | orchestrator | Wednesday 28 May 2025 17:17:12 +0000 (0:00:00.863) 0:03:05.369 ********* 2025-05-28 17:20:20.774560 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.774568 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:20:20.774575 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:20:20.774583 | orchestrator | 2025-05-28 17:20:20.774591 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-05-28 17:20:20.774599 | orchestrator | Wednesday 28 May 2025 17:17:14 +0000 (0:00:01.596) 0:03:06.966 ********* 2025-05-28 17:20:20.774607 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.774615 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:20:20.774622 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:20:20.774630 | orchestrator | 2025-05-28 17:20:20.774638 | orchestrator | TASK [include_role : mariadb] 
************************************************** 2025-05-28 17:20:20.774646 | orchestrator | Wednesday 28 May 2025 17:17:16 +0000 (0:00:02.141) 0:03:09.107 ********* 2025-05-28 17:20:20.774653 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:20:20.774661 | orchestrator | 2025-05-28 17:20:20.774669 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-05-28 17:20:20.774677 | orchestrator | Wednesday 28 May 2025 17:17:17 +0000 (0:00:01.158) 0:03:10.265 ********* 2025-05-28 17:20:20.774685 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-28 17:20:20.774693 | orchestrator | 2025-05-28 17:20:20.774701 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-05-28 17:20:20.774709 | orchestrator | Wednesday 28 May 2025 17:17:20 +0000 (0:00:03.120) 0:03:13.385 ********* 2025-05-28 17:20:20.774722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-28 17:20:20.774751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-28 17:20:20.774759 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.774793 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-28 17:20:20.774803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-28 17:20:20.774820 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.774838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' 
server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-28 17:20:20.774851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-28 17:20:20.774859 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.774865 | orchestrator | 2025-05-28 17:20:20.774872 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-05-28 17:20:20.774879 | orchestrator | Wednesday 28 May 2025 17:17:23 +0000 (0:00:02.496) 0:03:15.882 ********* 2025-05-28 17:20:20.774886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 
fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-28 17:20:20.774902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-28 17:20:20.774909 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.774919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-28 17:20:20.774927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-28 17:20:20.774933 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.774948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 
'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-28 17:20:20.774960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-28 17:20:20.774967 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.774973 | orchestrator | 2025-05-28 17:20:20.774980 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-05-28 17:20:20.774987 | orchestrator | Wednesday 28 May 2025 17:17:25 +0000 (0:00:02.105) 0:03:17.987 ********* 2025-05-28 17:20:20.774994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-28 17:20:20.775001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-28 17:20:20.775008 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.775015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-28 17:20:20.775022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-28 17:20:20.775033 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.775044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-28 17:20:20.775054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-28 17:20:20.775061 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.775068 | orchestrator | 2025-05-28 17:20:20.775074 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-05-28 17:20:20.775081 | orchestrator | Wednesday 28 May 2025 17:17:28 +0000 (0:00:02.557) 0:03:20.545 ********* 2025-05-28 17:20:20.775088 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.775094 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:20:20.775101 | 
orchestrator | changed: [testbed-node-2]
2025-05-28 17:20:20.775107 | orchestrator |
2025-05-28 17:20:20.775114 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2025-05-28 17:20:20.775121 | orchestrator | Wednesday 28 May 2025 17:17:30 +0000 (0:00:02.110) 0:03:22.656 *********
2025-05-28 17:20:20.775128 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:20:20.775134 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:20:20.775141 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:20:20.775147 | orchestrator |
2025-05-28 17:20:20.775154 | orchestrator | TASK [include_role : masakari] *************************************************
2025-05-28 17:20:20.775160 | orchestrator | Wednesday 28 May 2025 17:17:31 +0000 (0:00:01.377) 0:03:24.033 *********
2025-05-28 17:20:20.775167 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:20:20.775173 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:20:20.775180 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:20:20.775186 | orchestrator |
2025-05-28 17:20:20.775193 | orchestrator | TASK [include_role : memcached] ************************************************
2025-05-28 17:20:20.775200 | orchestrator | Wednesday 28 May 2025 17:17:31 +0000 (0:00:00.297) 0:03:24.331 *********
2025-05-28 17:20:20.775206 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 17:20:20.775213 | orchestrator |
2025-05-28 17:20:20.775219 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2025-05-28 17:20:20.775226 | orchestrator | Wednesday 28 May 2025 17:17:32 +0000 (0:00:01.078) 0:03:25.410 *********
2025-05-28 17:20:20.775233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-05-28 17:20:20.775244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-05-28 17:20:20.775256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-05-28 17:20:20.775263 | orchestrator |
2025-05-28 17:20:20.775273 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2025-05-28 17:20:20.775280 | orchestrator | Wednesday 28 May 2025 17:17:34 +0000 (0:00:01.768) 0:03:27.179 *********
2025-05-28 17:20:20.775287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-05-28 17:20:20.775294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-05-28 17:20:20.775301 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:20:20.775307 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:20:20.775314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-05-28 17:20:20.775335 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:20:20.775342 | orchestrator |
2025-05-28 17:20:20.775349 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2025-05-28 17:20:20.775355 | orchestrator | Wednesday 28 May 2025 17:17:35 +0000 (0:00:00.395) 0:03:27.574 *********
2025-05-28 17:20:20.775375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-05-28 17:20:20.775382 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:20:20.775389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-05-28 17:20:20.775396 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:20:20.775406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-05-28 17:20:20.775413 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:20:20.775420 | orchestrator |
2025-05-28 17:20:20.775426 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2025-05-28 17:20:20.775434 | orchestrator | Wednesday 28 May 2025 17:17:35 +0000 (0:00:00.598) 0:03:28.173 *********
2025-05-28 17:20:20.775440 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:20:20.775447 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:20:20.775453 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:20:20.775460 | orchestrator |
2025-05-28 17:20:20.775466 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2025-05-28 17:20:20.775473 | orchestrator | Wednesday 28 May 2025 17:17:36 +0000 (0:00:00.750) 0:03:28.924 *********
2025-05-28 17:20:20.775479 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:20:20.775490 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:20:20.775496 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:20:20.775503 | orchestrator |
2025-05-28 17:20:20.775509 | orchestrator | TASK [include_role : mistral] **************************************************
2025-05-28 17:20:20.775516 | orchestrator | Wednesday 28 May 2025 17:17:37 +0000 (0:00:01.205) 0:03:30.130 *********
2025-05-28 17:20:20.775523 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:20:20.775529 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:20:20.775536 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:20:20.775542 | orchestrator |
2025-05-28 17:20:20.775549 | orchestrator | TASK [include_role : neutron] **************************************************
2025-05-28 17:20:20.775555 | orchestrator | Wednesday 28 May 2025 17:17:37 +0000 (0:00:00.300) 0:03:30.430 *********
2025-05-28 17:20:20.775562 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 17:20:20.775569 | orchestrator |
2025-05-28 17:20:20.775575 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2025-05-28 17:20:20.775582 | orchestrator | Wednesday 28 May 2025 17:17:39 +0000 (0:00:01.365) 0:03:31.796 *********
2025-05-28 17:20:20.775593 | orchestrator | changed:
[testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 17:20:20.775600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.775608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.775619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.775629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 17:20:20.775641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 17:20:20.775648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.775655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.775667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.775674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-28 17:20:20.775686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.775698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-28 17:20:20.775705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 17:20:20.775712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.775719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': 
{'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.775730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:20:20.776937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-28 17:20:20.776970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.776978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-28 17:20:20.776985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}}})  2025-05-28 17:20:20.776993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.776999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:20:20.777015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-28 17:20:20.777026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.777036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 17:20:20.777043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 
'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.777049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-28 17:20:20.777056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.777068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 17:20:20.777078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 17:20:20.777090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:20:20.777097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:20:20.777104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.777111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.777124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 17:20:20.777136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.777143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.777149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.777156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 17:20:20.777304 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.777324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-28 17:20:20.777331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-28 17:20:20.777337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.777343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:20:20.777350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.777356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 17:20:20.777410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-28 17:20:20.777442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.777450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 17:20:20.777456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:20:20.777463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.777469 | orchestrator | 2025-05-28 17:20:20.777476 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-05-28 17:20:20.777482 | orchestrator | Wednesday 28 May 2025 17:17:43 +0000 (0:00:04.016) 0:03:35.812 ********* 2025-05-28 17:20:20.777536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 17:20:20.778743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.778760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.778768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.778775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 17:20:20.778782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.778872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-28 17:20:20.778886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-28 17:20:20.778893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 17:20:20.778900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.778907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.778913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:20:20.778967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.778980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.778987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.778993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 17:20:20.779000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 17:20:20.779006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-28 17:20:20.779062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.779075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.779082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 17:20:20.779089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-28 17:20:20.779096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 17:20:20.779107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-28 17:20:20.779155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:20:20.779167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.779174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.779180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.779186 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.779193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.779219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:20:20.779269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.779281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.779288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': 
{'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 17:20:20.779294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 17:20:20.779301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-28 17:20:20.779312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.779359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.779432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-28 17:20:20.779440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 17:20:20.779446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-28 17:20:20.779453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:20:20.779465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.779493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.779520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:20:20.779527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.779533 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.779540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 17:20:20.779546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-28 17:20:20.779559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 
17:20:20.779585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 17:20:20.779595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:20:20.779602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.779608 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.779614 | orchestrator | 2025-05-28 17:20:20.779621 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-05-28 17:20:20.779627 | orchestrator | Wednesday 28 May 2025 17:17:44 +0000 (0:00:01.457) 0:03:37.270 ********* 2025-05-28 17:20:20.779634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-28 17:20:20.779641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-28 17:20:20.779647 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.779657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-28 17:20:20.779663 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-28 17:20:20.779669 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.779675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-28 17:20:20.779681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-28 17:20:20.779687 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.779693 | orchestrator | 2025-05-28 17:20:20.779700 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-05-28 17:20:20.779706 | orchestrator | Wednesday 28 May 2025 17:17:47 +0000 (0:00:02.194) 0:03:39.464 ********* 2025-05-28 17:20:20.779712 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.779718 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:20:20.779724 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:20:20.779730 | orchestrator | 2025-05-28 17:20:20.779736 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-05-28 17:20:20.779742 | orchestrator | Wednesday 28 May 2025 17:17:48 +0000 (0:00:01.329) 0:03:40.794 ********* 2025-05-28 17:20:20.779748 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.779754 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:20:20.779760 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:20:20.779766 | orchestrator | 2025-05-28 17:20:20.779772 | orchestrator | TASK [include_role : placement] ************************************************ 2025-05-28 17:20:20.779778 | orchestrator | Wednesday 28 May 2025 17:17:50 +0000 (0:00:02.115) 0:03:42.909 ********* 2025-05-28 17:20:20.779784 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:20:20.779790 | orchestrator | 2025-05-28 17:20:20.779796 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-05-28 17:20:20.779802 | orchestrator | Wednesday 28 May 2025 17:17:51 +0000 (0:00:01.222) 0:03:44.132 ********* 2025-05-28 17:20:20.779830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-28 17:20:20.779838 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-28 17:20:20.779849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-28 17:20:20.779855 | orchestrator | 2025-05-28 17:20:20.779860 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-05-28 17:20:20.779866 | orchestrator | Wednesday 28 May 2025 17:17:54 +0000 (0:00:03.296) 0:03:47.429 ********* 2025-05-28 17:20:20.779871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-28 17:20:20.779877 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.779898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-28 17:20:20.779904 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.779913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-28 17:20:20.779923 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.779928 | orchestrator | 2025-05-28 17:20:20.779933 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-05-28 17:20:20.779939 | orchestrator | Wednesday 28 May 2025 17:17:55 +0000 (0:00:00.497) 0:03:47.927 ********* 2025-05-28 17:20:20.779944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-28 17:20:20.779950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-28 17:20:20.779956 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.779961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-28 17:20:20.779967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-28 17:20:20.779972 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.779978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-28 17:20:20.779983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-28 17:20:20.779988 | orchestrator | 
skipping: [testbed-node-2] 2025-05-28 17:20:20.779994 | orchestrator | 2025-05-28 17:20:20.780000 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-05-28 17:20:20.780006 | orchestrator | Wednesday 28 May 2025 17:17:56 +0000 (0:00:00.734) 0:03:48.661 ********* 2025-05-28 17:20:20.780013 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.780018 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:20:20.780024 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:20:20.780030 | orchestrator | 2025-05-28 17:20:20.780036 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-05-28 17:20:20.780042 | orchestrator | Wednesday 28 May 2025 17:17:57 +0000 (0:00:01.621) 0:03:50.282 ********* 2025-05-28 17:20:20.780048 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.780054 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:20:20.780060 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:20:20.780066 | orchestrator | 2025-05-28 17:20:20.780072 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-05-28 17:20:20.780078 | orchestrator | Wednesday 28 May 2025 17:17:59 +0000 (0:00:01.976) 0:03:52.259 ********* 2025-05-28 17:20:20.780084 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:20:20.780090 | orchestrator | 2025-05-28 17:20:20.780096 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-05-28 17:20:20.780102 | orchestrator | Wednesday 28 May 2025 17:18:01 +0000 (0:00:01.224) 0:03:53.483 ********* 2025-05-28 17:20:20.780129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-28 17:20:20.780140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 
5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.780147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.780154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-28 17:20:20.780177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.780188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.780255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-28 17:20:20.780271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.780278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.780283 | orchestrator | 2025-05-28 17:20:20.780289 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-05-28 17:20:20.780295 | orchestrator | Wednesday 28 May 2025 17:18:05 +0000 (0:00:04.290) 0:03:57.773 ********* 2025-05-28 17:20:20.780322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-28 17:20:20.780337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.780343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.780348 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.780354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-28 17:20:20.780376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.780382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 
'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.780402 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.780436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-28 17:20:20.780444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.780450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.780455 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.780461 | orchestrator | 2025-05-28 17:20:20.780466 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-05-28 17:20:20.780472 | orchestrator | Wednesday 28 May 2025 17:18:06 +0000 (0:00:00.964) 0:03:58.737 ********* 2025-05-28 17:20:20.780477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-28 17:20:20.780483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-28 17:20:20.780489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-28 17:20:20.780495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-28 17:20:20.780505 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.780510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-28 17:20:20.780516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-28 17:20:20.780538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-28 17:20:20.780544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-28 17:20:20.780550 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.780555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-28 17:20:20.780564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-28 17:20:20.780570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-28 17:20:20.780575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-28 17:20:20.780580 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.780586 | orchestrator | 2025-05-28 17:20:20.780591 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-05-28 17:20:20.780597 | orchestrator | Wednesday 28 May 2025 17:18:07 +0000 (0:00:00.921) 0:03:59.659 ********* 2025-05-28 17:20:20.780602 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.780607 | orchestrator | changed: [testbed-node-1] 2025-05-28 
17:20:20.780613 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:20:20.780618 | orchestrator | 2025-05-28 17:20:20.780623 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-05-28 17:20:20.780628 | orchestrator | Wednesday 28 May 2025 17:18:08 +0000 (0:00:01.628) 0:04:01.288 ********* 2025-05-28 17:20:20.780634 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.780639 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:20:20.780644 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:20:20.780650 | orchestrator | 2025-05-28 17:20:20.780655 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-05-28 17:20:20.780660 | orchestrator | Wednesday 28 May 2025 17:18:10 +0000 (0:00:01.942) 0:04:03.230 ********* 2025-05-28 17:20:20.780666 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:20:20.780671 | orchestrator | 2025-05-28 17:20:20.780676 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-05-28 17:20:20.780682 | orchestrator | Wednesday 28 May 2025 17:18:12 +0000 (0:00:01.536) 0:04:04.767 ********* 2025-05-28 17:20:20.780687 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-05-28 17:20:20.780692 | orchestrator | 2025-05-28 17:20:20.780702 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-05-28 17:20:20.780707 | orchestrator | Wednesday 28 May 2025 17:18:13 +0000 (0:00:01.102) 0:04:05.869 ********* 2025-05-28 17:20:20.780713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-28 17:20:20.780719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-28 17:20:20.780725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-28 17:20:20.780730 | orchestrator | 2025-05-28 17:20:20.780750 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-05-28 
17:20:20.780757 | orchestrator | Wednesday 28 May 2025 17:18:17 +0000 (0:00:03.964) 0:04:09.834 ********* 2025-05-28 17:20:20.780766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-28 17:20:20.780772 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.780777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-28 17:20:20.780783 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.780788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-28 17:20:20.780794 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.780799 | orchestrator | 2025-05-28 17:20:20.780804 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-05-28 17:20:20.780810 | orchestrator | Wednesday 28 May 2025 17:18:18 +0000 (0:00:01.274) 0:04:11.108 ********* 2025-05-28 17:20:20.780819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-28 17:20:20.780829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-28 17:20:20.780835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-28 17:20:20.780842 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.780847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-28 17:20:20.780853 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.780858 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-28 17:20:20.780864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-28 17:20:20.780869 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.780874 | orchestrator | 2025-05-28 17:20:20.780880 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-28 17:20:20.780885 | orchestrator | Wednesday 28 May 2025 17:18:20 +0000 (0:00:02.017) 0:04:13.125 ********* 2025-05-28 17:20:20.780890 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.780895 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:20:20.780901 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:20:20.780906 | orchestrator | 2025-05-28 17:20:20.780911 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-28 17:20:20.780916 | orchestrator | Wednesday 28 May 2025 17:18:23 +0000 (0:00:02.515) 0:04:15.641 ********* 2025-05-28 17:20:20.780922 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.780927 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:20:20.780932 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:20:20.780938 | orchestrator | 2025-05-28 17:20:20.780958 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-05-28 17:20:20.780964 | orchestrator | Wednesday 28 May 2025 17:18:26 +0000 (0:00:02.909) 0:04:18.550 ********* 2025-05-28 17:20:20.780970 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-05-28 17:20:20.780975 | orchestrator | 2025-05-28 17:20:20.780981 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-05-28 17:20:20.780986 | orchestrator | Wednesday 28 May 2025 17:18:26 +0000 (0:00:00.813) 0:04:19.364 ********* 2025-05-28 17:20:20.780994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-28 17:20:20.781004 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.781009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  
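A note on the skip pattern in the haproxy-config tasks above: an item is templated ("changed") only when its service is enabled, the host belongs to the service's group, and the item defines a haproxy section; everything else is reported as "skipping". Below is a minimal Python sketch of that gating, reconstructed from the (item=...) dumps visible in this log. It is an illustration of the observed behaviour only, not kolla-ansible's actual task code, and the helper name is hypothetical.

    # Reconstruction for illustration only; the field names are taken from the
    # (item=...) dumps above, the function itself is hypothetical.
    def renders_haproxy_config(item: dict) -> bool:
        value = item["value"]
        # The log mixes booleans and 'yes'/'no' strings for "enabled"
        # (e.g. neutron-tls-proxy carries enabled='no').
        enabled = value.get("enabled") in (True, "yes")
        # Items such as neutron-ovn-agent carry host_in_groups=False and are
        # skipped even though they are otherwise defined; absent key means the
        # host is in the group (placement-api, nova-api).
        in_groups = value.get("host_in_groups", True)
        # Only services with a haproxy section get frontend/backend config;
        # nova-scheduler and octavia-worker have none and are skipped.
        return enabled and in_groups and bool(value.get("haproxy"))

    # The nova-spicehtml5proxy item from this task is skipped on every node
    # because the service is disabled in this testbed:
    spice = {"key": "nova-spicehtml5proxy",
             "value": {"group": "nova-spicehtml5proxy", "enabled": False,
                       "haproxy": {"nova_spicehtml5proxy": {"enabled": False,
                                                            "port": "6082"}}}}
    assert renders_haproxy_config(spice) is False
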
2025-05-28 17:20:20.781014 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.781020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-28 17:20:20.781025 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.781031 | orchestrator | 2025-05-28 17:20:20.781036 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-05-28 17:20:20.781041 | orchestrator | Wednesday 28 May 2025 17:18:28 +0000 (0:00:01.230) 0:04:20.594 ********* 2025-05-28 17:20:20.781047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-28 17:20:20.781052 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.781058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-28 17:20:20.781063 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.781069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-28 17:20:20.781074 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.781079 | orchestrator | 2025-05-28 17:20:20.781100 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-05-28 17:20:20.781106 | orchestrator | Wednesday 28 May 2025 17:18:29 +0000 (0:00:01.673) 0:04:22.267 ********* 2025-05-28 17:20:20.781111 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.781117 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.781122 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.781127 | orchestrator | 2025-05-28 17:20:20.781133 | orchestrator | TASK [proxysql-config : 
Copying over nova-cell ProxySQL users config] ********** 2025-05-28 17:20:20.781138 | orchestrator | Wednesday 28 May 2025 17:18:30 +0000 (0:00:01.156) 0:04:23.424 ********* 2025-05-28 17:20:20.781148 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:20:20.781154 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:20:20.781159 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:20:20.781164 | orchestrator | 2025-05-28 17:20:20.781173 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-28 17:20:20.781178 | orchestrator | Wednesday 28 May 2025 17:18:33 +0000 (0:00:02.374) 0:04:25.798 ********* 2025-05-28 17:20:20.781183 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:20:20.781189 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:20:20.781194 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:20:20.781199 | orchestrator | 2025-05-28 17:20:20.781205 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-05-28 17:20:20.781210 | orchestrator | Wednesday 28 May 2025 17:18:36 +0000 (0:00:02.970) 0:04:28.769 ********* 2025-05-28 17:20:20.781215 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-05-28 17:20:20.781220 | orchestrator | 2025-05-28 17:20:20.781226 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-05-28 17:20:20.781231 | orchestrator | Wednesday 28 May 2025 17:18:37 +0000 (0:00:01.055) 0:04:29.825 ********* 2025-05-28 17:20:20.781237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-28 17:20:20.781242 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.781248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-28 17:20:20.781253 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.781258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-28 17:20:20.781264 | orchestrator | skipping: [testbed-node-2] 2025-05-28 
17:20:20.781269 | orchestrator | 2025-05-28 17:20:20.781274 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-05-28 17:20:20.781279 | orchestrator | Wednesday 28 May 2025 17:18:38 +0000 (0:00:00.980) 0:04:30.806 ********* 2025-05-28 17:20:20.781285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-28 17:20:20.781295 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.781316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-28 17:20:20.781322 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.781339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-28 17:20:20.781345 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.781350 | orchestrator | 2025-05-28 17:20:20.781356 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-05-28 17:20:20.781374 | orchestrator | Wednesday 28 May 2025 17:18:39 +0000 (0:00:01.270) 0:04:32.076 ********* 2025-05-28 17:20:20.781380 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.781385 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.781391 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.781396 | orchestrator | 2025-05-28 17:20:20.781401 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-28 17:20:20.781407 | orchestrator | Wednesday 28 May 2025 17:18:41 +0000 (0:00:01.737) 0:04:33.813 ********* 2025-05-28 17:20:20.781412 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:20:20.781417 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:20:20.781423 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:20:20.781428 | orchestrator | 2025-05-28 17:20:20.781433 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-28 17:20:20.781439 | orchestrator | Wednesday 28 May 2025 17:18:43 +0000 (0:00:02.331) 0:04:36.145 ********* 2025-05-28 17:20:20.781444 | orchestrator | ok: [testbed-node-0] 
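A note on the changed/ok alternation in the proxysql-config tasks: nova-cell includes the same role once per console proxy (nova-novncproxy, nova-spicehtml5proxy, nova-serialproxy), and each pass renders the same per-cell users and rules files, so the first pass at 17:18:20 reports "changed" while the later passes at 17:18:30 and 17:18:41 report "ok". That is standard Ansible copy/template idempotence: the module compares a checksum of the rendered content against the file already on disk. A small illustrative sketch of the mechanism follows (hashlib-based; the function and file name below are hypothetical, not kolla-ansible helpers).

    import hashlib
    import tempfile
    from pathlib import Path

    def copy_if_changed(dest: Path, rendered: str) -> str:
        """Report 'changed' only when the rendered content differs on disk."""
        new = hashlib.sha1(rendered.encode()).hexdigest()
        old = hashlib.sha1(dest.read_bytes()).hexdigest() if dest.exists() else None
        if old == new:
            return "ok"        # later nova-cell passes: identical content
        dest.write_text(rendered)
        return "changed"       # first pass writes the users/rules config

    # First write reports "changed"; an identical re-run reports "ok":
    tmp = Path(tempfile.mkdtemp()) / "nova_cell_users.yaml"
    assert copy_if_changed(tmp, "users: []\n") == "changed"
    assert copy_if_changed(tmp, "users: []\n") == "ok"
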
2025-05-28 17:20:20.781449 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:20:20.781455 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:20:20.781460 | orchestrator | 2025-05-28 17:20:20.781465 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-05-28 17:20:20.781470 | orchestrator | Wednesday 28 May 2025 17:18:46 +0000 (0:00:03.193) 0:04:39.339 ********* 2025-05-28 17:20:20.781476 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:20:20.781481 | orchestrator | 2025-05-28 17:20:20.781486 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-05-28 17:20:20.781492 | orchestrator | Wednesday 28 May 2025 17:18:48 +0000 (0:00:01.309) 0:04:40.649 ********* 2025-05-28 17:20:20.781497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-28 17:20:20.781508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-28 17:20:20.781514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-28 17:20:20.781537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-28 17:20:20.781547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.781553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-28 17:20:20.781559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-28 17:20:20.781568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-28 17:20:20.781574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-28 17:20:20.781595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-28 17:20:20.781605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-28 17:20:20.781611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-28 17:20:20.781617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-28 17:20:20.781622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.781631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.781637 | orchestrator | 2025-05-28 17:20:20.781643 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-05-28 17:20:20.781648 | orchestrator | Wednesday 28 May 2025 17:18:51 +0000 (0:00:03.590) 0:04:44.240 ********* 2025-05-28 17:20:20.781672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-28 17:20:20.781679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-28 17:20:20.781684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-28 17:20:20.781690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-28 
17:20:20.781699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.781705 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.781710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-28 17:20:20.781731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-28 17:20:20.781741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-28 17:20:20.781747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-28 17:20:20.781752 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.781761 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.781767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-28 17:20:20.781773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-28 17:20:20.781794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-28 17:20:20.781812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-28 17:20:20.781818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:20:20.781823 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.781829 | orchestrator | 2025-05-28 17:20:20.781834 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-05-28 17:20:20.781840 | orchestrator | Wednesday 28 May 2025 17:18:52 +0000 (0:00:00.637) 0:04:44.877 ********* 2025-05-28 17:20:20.781845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-28 17:20:20.781855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-28 17:20:20.781860 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.781866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-28 17:20:20.781871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-28 17:20:20.781876 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.781882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-28 17:20:20.781887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-28 17:20:20.781892 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.781898 | orchestrator | 2025-05-28 17:20:20.781903 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-05-28 17:20:20.781908 | orchestrator | Wednesday 28 May 2025 17:18:53 +0000 (0:00:00.769) 0:04:45.647 ********* 2025-05-28 17:20:20.781914 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.781919 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:20:20.781924 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:20:20.781929 | orchestrator | 2025-05-28 17:20:20.781935 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-05-28 17:20:20.781940 | orchestrator | Wednesday 28 May 2025 17:18:54 +0000 (0:00:01.546) 0:04:47.194 ********* 2025-05-28 17:20:20.781945 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.781951 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:20:20.781956 | orchestrator | changed: 
[testbed-node-2] 2025-05-28 17:20:20.781961 | orchestrator | 2025-05-28 17:20:20.781966 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-05-28 17:20:20.781972 | orchestrator | Wednesday 28 May 2025 17:18:56 +0000 (0:00:02.041) 0:04:49.236 ********* 2025-05-28 17:20:20.781977 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:20:20.781982 | orchestrator | 2025-05-28 17:20:20.781988 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-05-28 17:20:20.781993 | orchestrator | Wednesday 28 May 2025 17:18:58 +0000 (0:00:01.328) 0:04:50.564 ********* 2025-05-28 17:20:20.782036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-28 17:20:20.782044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-28 17:20:20.782056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-28 17:20:20.782062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-28 17:20:20.782086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-28 17:20:20.782098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-28 17:20:20.782108 | orchestrator | 2025-05-28 17:20:20.782114 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-05-28 17:20:20.782119 | orchestrator | Wednesday 28 May 2025 17:19:02 +0000 (0:00:04.683) 0:04:55.248 ********* 2025-05-28 17:20:20.782125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-28 17:20:20.782131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-28 17:20:20.782137 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.782158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-28 17:20:20.782177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-28 17:20:20.782188 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.782193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-28 17:20:20.782199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-28 17:20:20.782205 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.782210 | orchestrator | 2025-05-28 17:20:20.782215 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-05-28 17:20:20.782221 | orchestrator | Wednesday 28 May 2025 17:19:03 +0000 (0:00:00.795) 0:04:56.043 ********* 2025-05-28 17:20:20.782226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-28 17:20:20.782232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-28 17:20:20.782256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-28 17:20:20.782263 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.782268 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-28 17:20:20.782277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-28 17:20:20.782286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-28 17:20:20.782292 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.782297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-28 17:20:20.782303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-28 17:20:20.782308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-28 17:20:20.782314 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.782319 | orchestrator | 2025-05-28 17:20:20.782325 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-05-28 17:20:20.782330 | orchestrator | Wednesday 28 May 2025 17:19:04 +0000 (0:00:00.816) 0:04:56.860 ********* 2025-05-28 17:20:20.782335 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.782340 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.782346 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.782351 | orchestrator | 2025-05-28 17:20:20.782356 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-05-28 17:20:20.782372 | orchestrator | Wednesday 28 May 2025 17:19:04 +0000 (0:00:00.419) 0:04:57.279 ********* 2025-05-28 17:20:20.782377 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.782383 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.782388 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.782393 | orchestrator | 2025-05-28 17:20:20.782399 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-05-28 17:20:20.782404 | orchestrator | Wednesday 28 May 2025 17:19:06 +0000 (0:00:01.349) 0:04:58.629 ********* 2025-05-28 17:20:20.782409 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:20:20.782415 | orchestrator | 2025-05-28 17:20:20.782420 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-05-28 17:20:20.782425 | orchestrator | Wednesday 28 May 2025 17:19:07 +0000 (0:00:01.672) 0:05:00.302 ********* 2025-05-28 17:20:20.782431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-28 17:20:20.782437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-28 17:20:20.782464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:20:20.782474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:20:20.782480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 17:20:20.782485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-28 17:20:20.782491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-28 17:20:20.782497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-28 17:20:20.782506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:20:20.782528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:20:20.782538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-28 17:20:20.782544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 17:20:20.782550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:20:20.782555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:20:20.782561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 17:20:20.782567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-28 17:20:20.782579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 
'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-28 17:20:20.782588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:20:20.782593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:20:20.782599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-28 17:20:20.782605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-28 17:20:20.782615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}}}})  2025-05-28 17:20:20.782624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:20:20.782633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-28 17:20:20.782639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:20:20.782645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-28 17:20:20.782650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 
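The prometheus_server listener in the items above sets active_passive: True, meaning HAProxy should direct traffic to a single Prometheus instance at a time and fail over rather than load-balance. One common way to express that in HAProxy is to mark all but one backend server as backup (a minimal sketch under that assumption; kolla-ansible's actual template may realize active/passive differently, and the VIP placeholder and check options are assumptions -- only the port, mode, and node addresses come from this log):

    # Illustrative active/passive layout: node-0 serves, the others stand by.
    listen prometheus_server
        mode http
        bind <internal_vip>:9091
        server testbed-node-0 192.168.16.10:9091 check
        server testbed-node-1 192.168.16.11:9091 check backup
        server testbed-node-2 192.168.16.12:9091 check backup

Because prometheus_server_external is enabled: False in the same item, no corresponding frontend is rendered on the external VIP.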
 2025-05-28 17:20:20.782659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:20:20.782667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:20:20.782673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-28 17:20:20.782678 | orchestrator | 2025-05-28 17:20:20.782688 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-05-28 17:20:20.782694 | orchestrator | Wednesday 28 May 2025 17:19:11 +0000 (0:00:03.989) 0:05:04.291 ********* 2025-05-28 17:20:20.782700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-28 17:20:20.782705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-28 17:20:20.782711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 
'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:20:20.782720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:20:20.782726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 17:20:20.782736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-28 17:20:20.782745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-28 17:20:20.782751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:20:20.782756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:20:20.782766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-28 17:20:20.782772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-28 17:20:20.782778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-28 17:20:20.782785 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.782791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:20:20.782799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:20:20.782805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 17:20:20.782811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-28 17:20:20.782831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-28 17:20:20.782837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:20:20.782845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:20:20.782868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-28 17:20:20.782874 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.782879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-28 17:20:20.782885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-28 17:20:20.782897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:20:20.782902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:20:20.782908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 17:20:20.782916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-28 17:20:20.782925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-28 17:20:20.782931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:20:20.782941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:20:20.782946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-28 17:20:20.782952 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.782957 | orchestrator |
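The per-service 'haproxy' sub-dicts carried in the items above are the single source for everything in this pass: the frontend/backend rendering, the optional single-external-frontend wiring, and the firewall task that follows. As a rough Python sketch of that mapping (illustrative only, kolla-ansible actually renders Jinja2 templates inside the haproxy-config role; the render_haproxy helper and the VIP 192.168.16.254 are assumptions, while the backend addresses 192.168.16.10-12 do appear elsewhere in this log):

    # Illustrative sketch only, not kolla-ansible's real template code.
    # Maps one 'haproxy' service entry from the log to an HAProxy
    # frontend/backend pair. VIP is an assumed placeholder value.
    def render_haproxy(name, svc, vip, backends):
        lines = [
            f"frontend {name}_front",
            f"    mode {svc.get('mode', 'http')}",
            f"    bind {vip}:{svc.get('listen_port', svc['port'])}",
            f"    default_backend {name}_back",
            f"backend {name}_back",
            f"    mode {svc.get('mode', 'http')}",
        ]
        for i, host in enumerate(backends):
            # With 'active_passive' set, every server except the first is
            # marked as backup, so only one node receives traffic at a time.
            backup = " backup" if svc.get("active_passive") and i else ""
            lines.append(f"    server node{i} {host}:{svc['port']} check{backup}")
        return "\n".join(lines)

    svc = {"enabled": True, "mode": "http", "external": False,
           "port": "9091", "active_passive": True}
    print(render_haproxy("prometheus_server", svc, "192.168.16.254",
                         ["192.168.16.10", "192.168.16.11", "192.168.16.12"]))

The 'active_passive': True flag on prometheus_server and prometheus_alertmanager is why a single backend serves at a time for those services, matching the flags carried in the skipped items.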
2025-05-28 17:20:20.782963 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-05-28 17:20:20.782968 | orchestrator | Wednesday 28 May 2025 17:19:13 +0000 (0:00:01.275) 0:05:05.567 ********* 2025-05-28 17:20:20.782974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-28 17:20:20.782979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-28 17:20:20.782985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-28 17:20:20.782993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-28 17:20:20.783000 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.783006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-28 17:20:20.783011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-28 17:20:20.783019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-28 17:20:20.783025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-28 17:20:20.783031 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.783040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-28 17:20:20.783045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-28 17:20:20.783051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-28 17:20:20.783057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-28 17:20:20.783062 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.783068 | orchestrator | 2025-05-28 17:20:20.783073 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-05-28 17:20:20.783078 | orchestrator | Wednesday 28 May 2025 17:19:14 +0000 (0:00:00.991) 0:05:06.558 ********* 2025-05-28 17:20:20.783084 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.783089 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.783094 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.783102 | orchestrator | 2025-05-28 17:20:20.783108 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-05-28 17:20:20.783113 | orchestrator | Wednesday 28 May 2025 17:19:14 +0000 (0:00:00.460) 0:05:07.018 ********* 2025-05-28 17:20:20.783118 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.783124 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.783129 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.783135 | orchestrator | 2025-05-28 17:20:20.783140 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-05-28 17:20:20.783145 | orchestrator | Wednesday 28 May 2025 17:19:15 +0000 (0:00:01.375) 0:05:08.394 ********* 2025-05-28 17:20:20.783151 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:20:20.783156 | orchestrator | 2025-05-28 17:20:20.783162 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-05-28 17:20:20.783167 | orchestrator | Wednesday 28 May 2025 17:19:17 +0000 (0:00:01.793) 0:05:10.188 ********* 2025-05-28 17:20:20.783175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group':
'rabbitmq'}}}}) 2025-05-28 17:20:20.783184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-28 17:20:20.783194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-28 17:20:20.783200 | orchestrator | 2025-05-28 17:20:20.783206 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-05-28 17:20:20.783211 | orchestrator | Wednesday 28 May 2025 17:19:20 +0000 (0:00:02.474) 0:05:12.662 ********* 2025-05-28 17:20:20.783217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-28 17:20:20.783223 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.783231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-28 17:20:20.783237 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.783245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-28 17:20:20.783256 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.783261 | orchestrator | 2025-05-28 17:20:20.783267 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-05-28 17:20:20.783272 | orchestrator | Wednesday 28 May 2025 17:19:20 +0000 (0:00:00.378) 0:05:13.040 ********* 2025-05-28 17:20:20.783278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-28 17:20:20.783283 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.783289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-28 17:20:20.783294 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.783299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-28 17:20:20.783305 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.783310 | orchestrator |
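The 'Configuring firewall for rabbitmq' items above skip on all three nodes, consistent with firewall management being disabled in this testbed; when enabled, the role would open the listen port carried in each enabled haproxy entry. A minimal sketch of that idea under the assumption of a firewalld backend (the open_ports helper, zone, and dry_run flag are hypothetical; only the port data comes from the log):

    # Sketch only, not kolla-ansible's task code: open the firewall for
    # every enabled haproxy listener. Assumes firewalld is the backend.
    import subprocess

    def open_ports(haproxy_services, zone="public", dry_run=True):
        for svc in haproxy_services.values():
            # kolla mixes booleans and 'yes' strings for 'enabled'.
            if svc.get("enabled") not in (True, "yes"):
                continue  # disabled listeners get no firewall opening
            port = svc.get("listen_port") or svc["port"]
            cmd = ["firewall-cmd", "--permanent", f"--zone={zone}",
                   f"--add-port={port}/tcp"]
            print(" ".join(cmd))
            if not dry_run:
                subprocess.run(cmd, check=True)

    open_ports({"rabbitmq_management": {"enabled": "yes", "mode": "http",
                                        "port": "15672", "host_group": "rabbitmq"}})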
2025-05-28 17:20:20.783315 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-05-28 17:20:20.783321 | orchestrator | Wednesday 28 May 2025 17:19:21 +0000 (0:00:01.027) 0:05:14.068 ********* 2025-05-28 17:20:20.783326 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.783332 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.783337 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.783342 | orchestrator | 2025-05-28 17:20:20.783348 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-05-28 17:20:20.783353 | orchestrator | Wednesday 28 May 2025 17:19:22 +0000 (0:00:00.457) 0:05:14.525 ********* 2025-05-28 17:20:20.783358 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.783427 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.783433 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.783438 | orchestrator | 2025-05-28 17:20:20.783444 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-05-28 17:20:20.783449 | orchestrator | Wednesday 28 May 2025 17:19:23 +0000 (0:00:01.302) 0:05:15.828 ********* 2025-05-28 17:20:20.783454 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:20:20.783460 | orchestrator | 2025-05-28 17:20:20.783465 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-05-28 17:20:20.783471 | orchestrator | Wednesday 28 May 2025 17:19:25 +0000 (0:00:01.773) 0:05:17.601 ********* 2025-05-28 17:20:20.783476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-28 17:20:20.783493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-28 17:20:20.783499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-28 17:20:20.783505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-28 17:20:20.783511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-28 17:20:20.783524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-28 17:20:20.783530 | orchestrator | 2025-05-28 17:20:20.783535 | orchestrator | TASK [haproxy-config : Add 
configuration for skyline when using single external frontend] *** 2025-05-28 17:20:20.783541 | orchestrator | Wednesday 28 May 2025 17:19:31 +0000 (0:00:05.968) 0:05:23.570 ********* 2025-05-28 17:20:20.783584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-28 17:20:20.783597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-28 17:20:20.783603 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.783608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-28 17:20:20.783621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-28 17:20:20.783627 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.783645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-28 17:20:20.783651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-28 17:20:20.783657 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.783662 | orchestrator | 2025-05-28 17:20:20.783668 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-05-28 17:20:20.783673 | orchestrator | Wednesday 28 May 2025 17:19:31 +0000 (0:00:00.610) 0:05:24.180 ********* 2025-05-28 17:20:20.783679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-28 17:20:20.783685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-28 17:20:20.783690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-28 17:20:20.783700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-28 17:20:20.783706 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.783711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-28 17:20:20.783717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-28 17:20:20.783722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-28 17:20:20.783727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-28 17:20:20.783736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-28 17:20:20.783742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-28 17:20:20.783747 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.783752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-28 17:20:20.783760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-28 17:20:20.783765 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.783769 | orchestrator | 2025-05-28 17:20:20.783774 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-05-28 17:20:20.783779 | orchestrator | Wednesday 28 May 2025 17:19:33 +0000 (0:00:01.667) 0:05:25.848 ********* 2025-05-28 17:20:20.783784 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.783789 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:20:20.783793 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:20:20.783798 | orchestrator | 2025-05-28 17:20:20.783803 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-05-28 17:20:20.783808 | orchestrator | Wednesday 28 May 2025 17:19:34 +0000 (0:00:01.251) 0:05:27.099 ********* 2025-05-28 17:20:20.783812 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.783817 | orchestrator | 
changed: [testbed-node-1] 2025-05-28 17:20:20.783822 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:20:20.783827 | orchestrator |
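Unlike prometheus and rabbitmq, skyline is backed by the MariaDB cluster, so its ProxySQL users and rules tasks above report changed instead of skipping: the users file gives ProxySQL the service's database credentials and a default hostgroup to route to. A sketch of what one such entry looks like in ProxySQL's config syntax (field names follow ProxySQL's mysql_users table; the username, password, and hostgroup values below are placeholders, not values from this run):

    # Sketch only, not kolla's template: emit one ProxySQL mysql_users
    # entry for a service database account. All values are placeholders.
    def proxysql_user(username, password, hostgroup=0):
        return (
            "mysql_users = (\n"
            "  {\n"
            f"    username = \"{username}\"\n"
            f"    password = \"{password}\"\n"
            f"    default_hostgroup = {hostgroup}\n"
            "    transaction_persistent = 1\n"
            "  }\n"
            ")"
        )

    print(proxysql_user("skyline", "REDACTED-EXAMPLE", 0))

transaction_persistent = 1 keeps all statements of an open transaction on the same hostgroup, which matters when reads and writes are split across backends.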
2025-05-28 17:20:20.783831 | orchestrator | TASK [include_role : swift] **************************************************** 2025-05-28 17:20:20.783836 | orchestrator | Wednesday 28 May 2025 17:19:36 +0000 (0:00:02.061) 0:05:29.161 ********* 2025-05-28 17:20:20.783841 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.783846 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.783850 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.783855 | orchestrator | 2025-05-28 17:20:20.783860 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-05-28 17:20:20.783869 | orchestrator | Wednesday 28 May 2025 17:19:37 +0000 (0:00:00.337) 0:05:29.498 ********* 2025-05-28 17:20:20.783874 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.783879 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.783883 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.783888 | orchestrator | 2025-05-28 17:20:20.783893 | orchestrator | TASK [include_role : trove] **************************************************** 2025-05-28 17:20:20.783898 | orchestrator | Wednesday 28 May 2025 17:19:37 +0000 (0:00:00.591) 0:05:30.089 ********* 2025-05-28 17:20:20.783902 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.783907 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.783912 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.783917 | orchestrator | 2025-05-28 17:20:20.783921 | orchestrator | TASK [include_role : venus] **************************************************** 2025-05-28 17:20:20.783926 | orchestrator | Wednesday 28 May 2025 17:19:37 +0000 (0:00:00.318) 0:05:30.408 ********* 2025-05-28 17:20:20.783931 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.783936 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.783941 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.783945 | orchestrator | 2025-05-28 17:20:20.783950 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-05-28 17:20:20.783955 | orchestrator | Wednesday 28 May 2025 17:19:38 +0000 (0:00:00.304) 0:05:30.712 ********* 2025-05-28 17:20:20.783960 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.783964 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.783969 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.783974 | orchestrator | 2025-05-28 17:20:20.783979 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-05-28 17:20:20.783983 | orchestrator | Wednesday 28 May 2025 17:19:38 +0000 (0:00:00.297) 0:05:31.010 ********* 2025-05-28 17:20:20.783988 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.783993 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.783998 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.784002 | orchestrator | 2025-05-28 17:20:20.784007 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-05-28 17:20:20.784012 | orchestrator | Wednesday 28 May 2025 17:19:39 +0000 (0:00:00.802) 0:05:31.813 ********* 2025-05-28 17:20:20.784017 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:20:20.784022 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:20:20.784026 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:20:20.784031 | orchestrator | 2025-05-28 17:20:20.784036 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-05-28 17:20:20.784041 | orchestrator | Wednesday 28 May 2025 17:19:40 +0000 (0:00:00.657) 0:05:32.471 ********* 2025-05-28 17:20:20.784045 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:20:20.784050 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:20:20.784055 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:20:20.784060 | orchestrator | 2025-05-28 17:20:20.784065 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-05-28 17:20:20.784069 | orchestrator | Wednesday 28 May 2025 17:19:40 +0000 (0:00:00.340) 0:05:32.811 ********* 2025-05-28 17:20:20.784074 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:20:20.784079 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:20:20.784083 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:20:20.784088 | orchestrator | 2025-05-28 17:20:20.784093 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-05-28 17:20:20.784098 | orchestrator | Wednesday 28 May 2025 17:19:41 +0000 (0:00:01.141) 0:05:33.953 ********* 2025-05-28 17:20:20.784103 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:20:20.784107 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:20:20.784114 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:20:20.784119 | orchestrator | 2025-05-28 17:20:20.784124 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-05-28 17:20:20.784129 | orchestrator | Wednesday 28 May 2025 17:19:42 +0000 (0:00:00.851) 0:05:34.805 ********* 2025-05-28 17:20:20.784137 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:20:20.784141 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:20:20.784146 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:20:20.784151 | orchestrator | 2025-05-28 17:20:20.784156 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-05-28 17:20:20.784160 | orchestrator | Wednesday 28 May 2025 17:19:43 +0000 (0:00:00.859) 0:05:35.664 ********* 2025-05-28 17:20:20.784165 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.784170 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:20:20.784175 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:20:20.784179 | orchestrator | 2025-05-28 17:20:20.784187 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-05-28 17:20:20.784192 | orchestrator | Wednesday 28 May 2025 17:19:47 +0000 (0:00:04.717) 0:05:40.382 ********* 2025-05-28 17:20:20.784197 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:20:20.784202 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:20:20.784206 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:20:20.784211 | orchestrator | 2025-05-28 17:20:20.784216 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-05-28 17:20:20.784221 | orchestrator | Wednesday 28 May 2025 17:19:51 +0000 (0:00:03.706) 0:05:44.088 ********* 2025-05-28 17:20:20.784225 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.784230 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:20:20.784235 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:20:20.784240 | orchestrator | 2025-05-28 17:20:20.784244 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup
proxysql to start] ************* 2025-05-28 17:20:20.784249 | orchestrator | Wednesday 28 May 2025 17:20:04 +0000 (0:00:13.005) 0:05:57.093 ********* 2025-05-28 17:20:20.784254 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:20:20.784259 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:20:20.784263 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:20:20.784268 | orchestrator | 2025-05-28 17:20:20.784273 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-05-28 17:20:20.784278 | orchestrator | Wednesday 28 May 2025 17:20:05 +0000 (0:00:00.726) 0:05:57.820 ********* 2025-05-28 17:20:20.784283 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:20:20.784287 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:20:20.784292 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:20:20.784297 | orchestrator | 2025-05-28 17:20:20.784302 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-05-28 17:20:20.784306 | orchestrator | Wednesday 28 May 2025 17:20:09 +0000 (0:00:04.579) 0:06:02.399 ********* 2025-05-28 17:20:20.784311 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.784316 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.784321 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.784325 | orchestrator | 2025-05-28 17:20:20.784330 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-05-28 17:20:20.784335 | orchestrator | Wednesday 28 May 2025 17:20:10 +0000 (0:00:00.325) 0:06:02.725 ********* 2025-05-28 17:20:20.784340 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.784344 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.784349 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.784354 | orchestrator | 2025-05-28 17:20:20.784359 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-05-28 17:20:20.784378 | orchestrator | Wednesday 28 May 2025 17:20:10 +0000 (0:00:00.676) 0:06:03.402 ********* 2025-05-28 17:20:20.784383 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.784387 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.784392 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.784397 | orchestrator | 2025-05-28 17:20:20.784402 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-05-28 17:20:20.784407 | orchestrator | Wednesday 28 May 2025 17:20:11 +0000 (0:00:00.355) 0:06:03.758 ********* 2025-05-28 17:20:20.784411 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.784416 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.784425 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.784430 | orchestrator | 2025-05-28 17:20:20.784434 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-05-28 17:20:20.784439 | orchestrator | Wednesday 28 May 2025 17:20:11 +0000 (0:00:00.361) 0:06:04.119 ********* 2025-05-28 17:20:20.784444 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.784449 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.784453 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.784458 | orchestrator | 2025-05-28 17:20:20.784463 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-05-28 17:20:20.784468 | orchestrator | 
Wednesday 28 May 2025 17:20:12 +0000 (0:00:00.352) 0:06:04.471 ********* 2025-05-28 17:20:20.784472 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:20:20.784477 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:20:20.784482 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:20:20.784487 | orchestrator | 2025-05-28 17:20:20.784492 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-05-28 17:20:20.784496 | orchestrator | Wednesday 28 May 2025 17:20:12 +0000 (0:00:00.654) 0:06:05.126 ********* 2025-05-28 17:20:20.784501 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:20:20.784506 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:20:20.784511 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:20:20.784515 | orchestrator | 2025-05-28 17:20:20.784520 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-05-28 17:20:20.784525 | orchestrator | Wednesday 28 May 2025 17:20:17 +0000 (0:00:04.779) 0:06:09.906 ********* 2025-05-28 17:20:20.784530 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:20:20.784534 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:20:20.784539 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:20:20.784544 | orchestrator | 2025-05-28 17:20:20.784549 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:20:20.784554 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-05-28 17:20:20.784562 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-05-28 17:20:20.784567 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-05-28 17:20:20.784572 | orchestrator | 2025-05-28 17:20:20.784576 | orchestrator | 2025-05-28 17:20:20.784581 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:20:20.784586 | orchestrator | Wednesday 28 May 2025 17:20:18 +0000 (0:00:00.789) 0:06:10.695 ********* 2025-05-28 17:20:20.784591 | orchestrator | =============================================================================== 2025-05-28 17:20:20.784598 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.01s 2025-05-28 17:20:20.784603 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 6.36s 2025-05-28 17:20:20.784608 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.97s 2025-05-28 17:20:20.784613 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 5.80s 2025-05-28 17:20:20.784618 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.61s 2025-05-28 17:20:20.784622 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 5.08s 2025-05-28 17:20:20.784627 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.78s 2025-05-28 17:20:20.784632 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.78s 2025-05-28 17:20:20.784637 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.77s 2025-05-28 17:20:20.784641 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.72s 2025-05-28 17:20:20.784646 | 
orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 4.68s 2025-05-28 17:20:20.784655 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.58s 2025-05-28 17:20:20.784659 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.37s 2025-05-28 17:20:20.784664 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.29s 2025-05-28 17:20:20.784669 | orchestrator | haproxy-config : Copying over ceph-rgw haproxy config ------------------- 4.13s 2025-05-28 17:20:20.784674 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.07s 2025-05-28 17:20:20.784678 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.02s 2025-05-28 17:20:20.784683 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 3.99s 2025-05-28 17:20:20.784688 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.96s 2025-05-28 17:20:20.784693 | orchestrator | loadbalancer : Wait for backup haproxy to start ------------------------- 3.71s
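The "Wait for backup haproxy to start" and "Wait for ... to listen on VIP" handlers above act as readiness gates: the rolling restart of the backup keepalived/haproxy/proxysql containers only proceeds once each service actually accepts connections. A minimal sketch of such a TCP readiness probe (host and port are illustrative assumptions, not values from this log; kolla-ansible expresses the same gate as Ansible tasks rather than standalone code):

```python
# Minimal sketch of a "wait until something listens on the VIP" probe.
# Host and port below are illustrative assumptions, not values taken
# from this deployment.
import socket
import time


def wait_for_listen(host: str, port: int, timeout: float = 60.0, interval: float = 1.0) -> bool:
    """Return True once host:port accepts a TCP connection, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful connect means the proxy behind the VIP is up
            # and accepting traffic, not merely that its container started.
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)  # not listening yet, retry
    return False


if __name__ == "__main__":
    # Hypothetical VIP and port, analogous to "Wait for haproxy to listen on VIP".
    print(wait_for_listen("192.168.16.254", 443))
```

Gating on the listener rather than on container start avoids racing requests against a proxy that is still loading its configuration.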
2025-05-28 17:20:20.784697 | orchestrator | 2025-05-28 17:20:20 | INFO  | Wait 1 second(s) until the next check
[... polling condensed: from 2025-05-28 17:20:23 to 17:22:29 the orchestrator re-checked roughly every 3 seconds, each cycle logging "Task 75f8a76b-6ea2-42d1-99f7-97e14c9e1a7d is in state STARTED", "Task 4fcc0b0c-bcde-4847-b04f-c856fbe593ed is in state STARTED", "Task 498abbe0-8763-4901-8190-d0026b259450 is in state STARTED" and "Wait 1 second(s) until the next check" ...]
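The condensed polling above is the deployment tooling watching its long-running tasks: it looks up each task's state and, as long as any is still STARTED, sleeps and re-checks. A minimal sketch of that wait loop, where get_task_state is a hypothetical stand-in for the real status lookup:

```python
# Minimal sketch of the poll-until-done loop seen above. get_task_state()
# is a hypothetical placeholder for the real status lookup; the print
# statements mirror the log format.
import time
from typing import Callable


def wait_for_tasks(task_ids: list[str],
                   get_task_state: Callable[[str], str],
                   interval: float = 1.0) -> dict[str, str]:
    """Poll until no task reports STARTED; return the final state of each."""
    states = {task_id: "STARTED" for task_id in task_ids}
    while any(state == "STARTED" for state in states.values()):
        for task_id in task_ids:
            states[task_id] = get_task_state(task_id)
            print(f"Task {task_id} is in state {states[task_id]}")
        if any(state == "STARTED" for state in states.values()):
            print(f"Wait {interval:.0f} second(s) until the next check")
            time.sleep(interval)
    return states
```

Task 498abbe0-8763-4901-8190-d0026b259450 reaches SUCCESS just below; note that the Ceph play output which then appears carries 17:11:xx Ansible timestamps under 17:22:32 console timestamps, i.e. it was buffered while the task ran and flushed on completion.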
2025-05-28 17:22:32.123554 | orchestrator | 2025-05-28 17:22:32 | INFO  | Task 75f8a76b-6ea2-42d1-99f7-97e14c9e1a7d is in state STARTED 2025-05-28 17:22:32.125688 | orchestrator | 2025-05-28 17:22:32 | INFO  | Task 4fcc0b0c-bcde-4847-b04f-c856fbe593ed is in state STARTED 2025-05-28 17:22:32.130617 | orchestrator | 2025-05-28 17:22:32 | INFO  | Task 498abbe0-8763-4901-8190-d0026b259450 is in state SUCCESS 2025-05-28 17:22:32.132993 | orchestrator | 2025-05-28 17:22:32.133032 | orchestrator | 2025-05-28 17:22:32.133375 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-05-28 17:22:32.133392 | orchestrator | 2025-05-28 17:22:32.133404 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-05-28 17:22:32.133415 | orchestrator | Wednesday 28 May 2025 17:11:23 +0000 (0:00:00.852) 0:00:00.852 ********* 2025-05-28 17:22:32.133428 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:22:32.133440 | orchestrator | 2025-05-28 17:22:32.133452 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-05-28 17:22:32.133463 | orchestrator | Wednesday 28 May 2025 17:11:24 +0000 (0:00:00.986) 0:00:01.839 ********* 2025-05-28 17:22:32.133474 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.133486 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.133497 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.133508 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.133518 | orchestrator
| ok: [testbed-node-2] 2025-05-28 17:22:32.133529 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.133539 | orchestrator | 2025-05-28 17:22:32.133551 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-05-28 17:22:32.133635 | orchestrator | Wednesday 28 May 2025 17:11:26 +0000 (0:00:01.429) 0:00:03.268 ********* 2025-05-28 17:22:32.133652 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.133663 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.133674 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.133685 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.133696 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.133706 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.133717 | orchestrator | 2025-05-28 17:22:32.133728 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-05-28 17:22:32.133741 | orchestrator | Wednesday 28 May 2025 17:11:27 +0000 (0:00:01.006) 0:00:04.275 ********* 2025-05-28 17:22:32.133752 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.133764 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.133774 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.133785 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.133796 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.133807 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.133818 | orchestrator | 2025-05-28 17:22:32.133829 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-05-28 17:22:32.133840 | orchestrator | Wednesday 28 May 2025 17:11:28 +0000 (0:00:00.978) 0:00:05.253 ********* 2025-05-28 17:22:32.133851 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.133862 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.133927 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.133948 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.134948 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.134966 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.135005 | orchestrator | 2025-05-28 17:22:32.135018 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-05-28 17:22:32.135030 | orchestrator | Wednesday 28 May 2025 17:11:28 +0000 (0:00:00.609) 0:00:05.863 ********* 2025-05-28 17:22:32.135041 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.135052 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.135063 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.135073 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.135084 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.135095 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.135105 | orchestrator | 2025-05-28 17:22:32.135116 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-05-28 17:22:32.135127 | orchestrator | Wednesday 28 May 2025 17:11:29 +0000 (0:00:00.516) 0:00:06.380 ********* 2025-05-28 17:22:32.135138 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.135763 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.135777 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.135788 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.135799 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.135809 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.135820 | orchestrator | 2025-05-28 17:22:32.135830 | orchestrator 
| TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-05-28 17:22:32.135842 | orchestrator | Wednesday 28 May 2025 17:11:30 +0000 (0:00:00.897) 0:00:07.277 ********* 2025-05-28 17:22:32.135853 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.135865 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.135876 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.135923 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.135935 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.135946 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.135957 | orchestrator | 2025-05-28 17:22:32.135968 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-05-28 17:22:32.135979 | orchestrator | Wednesday 28 May 2025 17:11:30 +0000 (0:00:00.594) 0:00:07.872 ********* 2025-05-28 17:22:32.135990 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.136001 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.136012 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.136023 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.136033 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.136044 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.136055 | orchestrator | 2025-05-28 17:22:32.136066 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-28 17:22:32.136095 | orchestrator | Wednesday 28 May 2025 17:11:31 +0000 (0:00:00.840) 0:00:08.712 ********* 2025-05-28 17:22:32.136107 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-28 17:22:32.136118 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-28 17:22:32.136129 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-28 17:22:32.136153 | orchestrator | 2025-05-28 17:22:32.136165 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-05-28 17:22:32.136175 | orchestrator | Wednesday 28 May 2025 17:11:32 +0000 (0:00:00.781) 0:00:09.493 ********* 2025-05-28 17:22:32.136186 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.136263 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.136420 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.136436 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.136446 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.136457 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.136468 | orchestrator | 2025-05-28 17:22:32.136493 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-05-28 17:22:32.136505 | orchestrator | Wednesday 28 May 2025 17:11:33 +0000 (0:00:01.176) 0:00:10.669 ********* 2025-05-28 17:22:32.136515 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-28 17:22:32.136526 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-28 17:22:32.136551 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-28 17:22:32.136562 | orchestrator | 2025-05-28 17:22:32.136572 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-05-28 17:22:32.136583 | orchestrator | Wednesday 28 May 2025 17:11:36 +0000 (0:00:03.093) 0:00:13.763 ********* 2025-05-28 17:22:32.136594 | orchestrator 
| skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-28 17:22:32.136605 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-28 17:22:32.136616 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-28 17:22:32.136638 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.136650 | orchestrator | 2025-05-28 17:22:32.136660 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-05-28 17:22:32.136671 | orchestrator | Wednesday 28 May 2025 17:11:37 +0000 (0:00:00.620) 0:00:14.384 ********* 2025-05-28 17:22:32.136685 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.136700 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.136741 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.136753 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.136764 | orchestrator | 2025-05-28 17:22:32.136775 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-05-28 17:22:32.136785 | orchestrator | Wednesday 28 May 2025 17:11:38 +0000 (0:00:01.282) 0:00:15.667 ********* 2025-05-28 17:22:32.136797 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.136889 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.136901 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.136911 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.136921 | orchestrator | 2025-05-28 17:22:32.136931 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-05-28 17:22:32.136940 | orchestrator | Wednesday 28 May 2025 17:11:38 +0000 (0:00:00.413) 0:00:16.081 ********* 2025-05-28 17:22:32.136960 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-28 17:11:34.224016', 'end': '2025-05-28 17:11:34.500059', 'delta': '0:00:00.276043', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.137019 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-28 17:11:35.418929', 'end': '2025-05-28 17:11:35.689683', 'delta': '0:00:00.270754', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.137033 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-28 17:11:36.235535', 'end': '2025-05-28 17:11:36.499509', 'delta': '0:00:00.263974', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.137044 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.137054 | orchestrator | 2025-05-28 17:22:32.137063 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-05-28 17:22:32.137073 | orchestrator | Wednesday 28 May 2025 17:11:39 +0000 (0:00:00.326) 0:00:16.407 ********* 2025-05-28 17:22:32.137083 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.137092 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.137102 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.137112 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.137121 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.137131 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.137140 | orchestrator | 2025-05-28 17:22:32.137150 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-05-28 17:22:32.137160 | orchestrator | Wednesday 28 May 2025 17:11:41 +0000 (0:00:01.954) 0:00:18.362 ********* 2025-05-28 17:22:32.137169 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.137179 | orchestrator | 2025-05-28 17:22:32.137188 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 
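The skipped items above preserve the exact probe ceph-ansible ran on each monitor host at 17:11:34-17:11:36: `docker ps -q --filter name=ceph-mon-<hostname>`. That command prints the IDs of matching running containers, so the empty stdout recorded here is why no running_mon fact is set. A standalone sketch of the same check (assumes only that the docker CLI is on PATH):

```python
# Minimal sketch of the "find a running mon container" probe recorded in
# the skip results above: empty stdout from `docker ps -q --filter ...`
# means no matching container is running.
import subprocess


def find_mon_container(hostname: str) -> str | None:
    """Return the ID of a running ceph-mon container for hostname, else None."""
    result = subprocess.run(
        ["docker", "ps", "-q", "--filter", f"name=ceph-mon-{hostname}"],
        capture_output=True,
        text=True,
        check=False,  # mirrors the failed_when_result: False seen in the log
    )
    container_id = result.stdout.strip()
    return container_id or None


if __name__ == "__main__":
    print(find_mon_container("testbed-node-0"))
```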
2025-05-28 17:22:32.137198 | orchestrator | Wednesday 28 May 2025 17:11:42 +0000 (0:00:00.796) 0:00:19.158 ********* 2025-05-28 17:22:32.137208 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.137217 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.137227 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.137236 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.137246 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.137255 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.137265 | orchestrator | 2025-05-28 17:22:32.137274 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-05-28 17:22:32.137310 | orchestrator | Wednesday 28 May 2025 17:11:43 +0000 (0:00:01.441) 0:00:20.600 ********* 2025-05-28 17:22:32.137319 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.137329 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.137347 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.137357 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.137366 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.137376 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.137385 | orchestrator | 2025-05-28 17:22:32.137395 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-05-28 17:22:32.137404 | orchestrator | Wednesday 28 May 2025 17:11:45 +0000 (0:00:01.609) 0:00:22.210 ********* 2025-05-28 17:22:32.137414 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.137423 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.137433 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.137442 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.137452 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.137461 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.137471 | orchestrator | 2025-05-28 17:22:32.137480 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-05-28 17:22:32.137490 | orchestrator | Wednesday 28 May 2025 17:11:46 +0000 (0:00:01.260) 0:00:23.470 ********* 2025-05-28 17:22:32.137499 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.137509 | orchestrator | 2025-05-28 17:22:32.137523 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-05-28 17:22:32.137533 | orchestrator | Wednesday 28 May 2025 17:11:46 +0000 (0:00:00.232) 0:00:23.703 ********* 2025-05-28 17:22:32.137542 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.137552 | orchestrator | 2025-05-28 17:22:32.137561 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-05-28 17:22:32.137571 | orchestrator | Wednesday 28 May 2025 17:11:46 +0000 (0:00:00.281) 0:00:23.984 ********* 2025-05-28 17:22:32.137580 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.137590 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.137599 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.137608 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.137618 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.137627 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.137637 | orchestrator | 2025-05-28 17:22:32.137647 | orchestrator | TASK [ceph-facts : Resolve device link(s)] 
************************************* 2025-05-28 17:22:32.137662 | orchestrator | Wednesday 28 May 2025 17:11:47 +0000 (0:00:01.070) 0:00:25.055 ********* 2025-05-28 17:22:32.137672 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.137681 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.137691 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.137700 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.137709 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.137719 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.137728 | orchestrator | 2025-05-28 17:22:32.137738 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-05-28 17:22:32.137748 | orchestrator | Wednesday 28 May 2025 17:11:49 +0000 (0:00:01.368) 0:00:26.424 ********* 2025-05-28 17:22:32.137757 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.137767 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.137776 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.137786 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.137795 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.137804 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.137814 | orchestrator | 2025-05-28 17:22:32.137824 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-05-28 17:22:32.137833 | orchestrator | Wednesday 28 May 2025 17:11:50 +0000 (0:00:01.067) 0:00:27.492 ********* 2025-05-28 17:22:32.137843 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.137852 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.137862 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.137871 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.137880 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.137900 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.137910 | orchestrator | 2025-05-28 17:22:32.137919 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-05-28 17:22:32.137929 | orchestrator | Wednesday 28 May 2025 17:11:51 +0000 (0:00:00.805) 0:00:28.297 ********* 2025-05-28 17:22:32.137938 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.137948 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.137957 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.137966 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.137976 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.137985 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.137995 | orchestrator | 2025-05-28 17:22:32.138004 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-05-28 17:22:32.138014 | orchestrator | Wednesday 28 May 2025 17:11:51 +0000 (0:00:00.556) 0:00:28.854 ********* 2025-05-28 17:22:32.138066 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.138076 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.138085 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.138095 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.138104 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.138114 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.138123 | orchestrator | 2025-05-28 17:22:32.138133 | orchestrator | TASK [ceph-facts : Set_fact build 
bluestore_wal_devices from resolved symlinks] *** 2025-05-28 17:22:32.138143 | orchestrator | Wednesday 28 May 2025 17:11:52 +0000 (0:00:00.703) 0:00:29.558 ********* 2025-05-28 17:22:32.138152 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.138162 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.138171 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.138181 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.138190 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.138200 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.138209 | orchestrator | 2025-05-28 17:22:32.138219 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-05-28 17:22:32.138229 | orchestrator | Wednesday 28 May 2025 17:11:53 +0000 (0:00:00.642) 0:00:30.200 *********
[... per-device skip items condensed: for testbed-node-0 (disk id 8413bafc-5d5c-45aa-9537-e8a0170ebd39) and testbed-node-1 (disk id f4e43ce5-2124-49cc-9590-e2dc33c78c64) the task skipped the identical virtual devices loop0-loop7 (0.00 Bytes each), the 80.00 GB QEMU HARDDISK sda (partitions sda1 cloudimg-rootfs 79.00 GB, sda14 4.00 MB, sda15 UEFI 106.00 MB, sda16 BOOT 913.00 MB) and the QEMU DVD-ROM sr0 (label config-2), then skipped the host itself; for testbed-node-2 it likewise skipped loop0-loop7 before reaching sda ...]
2025-05-28 17:22:32.138686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b110ae9-3b24-4117-b531-0e276aed65fb', 'scsi-SQEMU_QEMU_HARDDISK_3b110ae9-3b24-4117-b531-0e276aed65fb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b110ae9-3b24-4117-b531-0e276aed65fb-part1', 'scsi-SQEMU_QEMU_HARDDISK_3b110ae9-3b24-4117-b531-0e276aed65fb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b110ae9-3b24-4117-b531-0e276aed65fb-part14', 'scsi-SQEMU_QEMU_HARDDISK_3b110ae9-3b24-4117-b531-0e276aed65fb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b110ae9-3b24-4117-b531-0e276aed65fb-part15', 'scsi-SQEMU_QEMU_HARDDISK_3b110ae9-3b24-4117-b531-0e276aed65fb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b110ae9-3b24-4117-b531-0e276aed65fb-part16', 'scsi-SQEMU_QEMU_HARDDISK_3b110ae9-3b24-4117-b531-0e276aed65fb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 17:22:32.138708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-28-16-27-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 17:22:32.138719 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b27f73ed--a290--5ab5--82ba--70ebe910dd97-osd--block--b27f73ed--a290--5ab5--82ba--70ebe910dd97', 'dm-uuid-LVM-9KLnSV2FMdu5smNS3y5wyX3w7ayXNG7y8kFFVylj4M6XQm1D32z3UL9kTpdBpt24'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-28 17:22:32.138729 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fbdc558b--af0f--50ef--b610--4a3c4fb87cac-osd--block--fbdc558b--af0f--50ef--b610--4a3c4fb87cac', 
'dm-uuid-LVM-3OcUXFJdZOjxX4MhVM6COoKVtLABKf07UF6CWmNn0ylHpl2JtM11yyjevZteTWOE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-28 17:22:32.138739 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:22:32.138749 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:22:32.138759 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:22:32.138769 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:22:32.138783 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:22:32.138800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:22:32.138816 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': 
None, 'virtual': 1}})  2025-05-28 17:22:32.138826 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:22:32.138837 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5', 'scsi-SQEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5-part1', 'scsi-SQEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5-part14', 'scsi-SQEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5-part15', 'scsi-SQEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5-part16', 'scsi-SQEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 17:22:32.138847 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.138862 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b27f73ed--a290--5ab5--82ba--70ebe910dd97-osd--block--b27f73ed--a290--5ab5--82ba--70ebe910dd97'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-d65QUk-DtJC-JGe9-CIIx-PJTJ-W9E2-iJBFyL', 'scsi-0QEMU_QEMU_HARDDISK_da6420c4-4562-42e6-8445-8de06d590092', 'scsi-SQEMU_QEMU_HARDDISK_da6420c4-4562-42e6-8445-8de06d590092'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 17:22:32.138885 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--fbdc558b--af0f--50ef--b610--4a3c4fb87cac-osd--block--fbdc558b--af0f--50ef--b610--4a3c4fb87cac'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Torr0x-o6IT-Uhyq-LPgW-VFfl-CEez-PDbgrh', 'scsi-0QEMU_QEMU_HARDDISK_66780fe2-f30a-4cd5-a925-045679329f08', 'scsi-SQEMU_QEMU_HARDDISK_66780fe2-f30a-4cd5-a925-045679329f08'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 17:22:32.138896 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_705788e5-cc1d-4d40-94fd-fb0e2f22a483', 'scsi-SQEMU_QEMU_HARDDISK_705788e5-cc1d-4d40-94fd-fb0e2f22a483'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 17:22:32.138907 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-28-16-27-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 17:22:32.138917 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b5b3f734--7a3a--56eb--b9e1--00e08c7f7e25-osd--block--b5b3f734--7a3a--56eb--b9e1--00e08c7f7e25', 'dm-uuid-LVM-LBOmjHRZzCuxZPOQJodwcdTLf69Ofevmg8e2XHQ3Pwz2n2xPxhpILlxcVPgbAlKk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-28 17:22:32.138927 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7e811d1b--ccc9--571e--beba--983efbae239d-osd--block--7e811d1b--ccc9--571e--beba--983efbae239d', 
'dm-uuid-LVM-CAITT3RP6TLMc9HmcMNx0JcxwXriugGpoki7VaPbKtuGl6xe2aNOrqHspFG1X3oT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-28 17:22:32.138937 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:22:32.138960 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:22:32.138970 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:22:32.138998 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:22:32.139008 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.139018 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:22:32.139027 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:22:32.139037 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:22:32.139047 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:22:32.139068 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c', 'scsi-SQEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c-part1', 'scsi-SQEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c-part14', 'scsi-SQEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c-part15', 'scsi-SQEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c-part16', 'scsi-SQEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 17:22:32.139088 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b5b3f734--7a3a--56eb--b9e1--00e08c7f7e25-osd--block--b5b3f734--7a3a--56eb--b9e1--00e08c7f7e25'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-aH6NYF-XOTJ-BzO5-wlK5-Wg1X-YPyb-SmFGYl', 'scsi-0QEMU_QEMU_HARDDISK_0444fcd6-ace4-41be-a60f-d61a86741ad0', 'scsi-SQEMU_QEMU_HARDDISK_0444fcd6-ace4-41be-a60f-d61a86741ad0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 17:22:32.139098 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--91f15584--1a8a--582b--a00a--c533bea87f37-osd--block--91f15584--1a8a--582b--a00a--c533bea87f37', 'dm-uuid-LVM-SZ7fUzalikI3yYKAExVeTMfqLzlx29glVO0dFKrypnLKwBHEDds3DU1HwME1nrC4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-28 17:22:32.139109 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--7e811d1b--ccc9--571e--beba--983efbae239d-osd--block--7e811d1b--ccc9--571e--beba--983efbae239d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oa4YS1-Oof0-xLLq-Kbqf-lN5t-767L-fbWVLa', 'scsi-0QEMU_QEMU_HARDDISK_d5a98c17-e489-4dc0-a000-f021a8d49d4d', 'scsi-SQEMU_QEMU_HARDDISK_d5a98c17-e489-4dc0-a000-f021a8d49d4d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 17:22:32.139118 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d85522ca--9ab4--5810--aefe--18d74b0f7dbe-osd--block--d85522ca--9ab4--5810--aefe--18d74b0f7dbe', 'dm-uuid-LVM-AzC3Hw2lyZQrpdA8BrMkmXdWsef6cE9NyBcJfcYWpPONb2VHWS4VHXN4HV8cED63'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-28 17:22:32.139135 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3ba669b-02ce-4ac9-8d34-f5b1bbc1f6b4', 'scsi-SQEMU_QEMU_HARDDISK_c3ba669b-02ce-4ac9-8d34-f5b1bbc1f6b4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 17:22:32.139149 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:22:32.139165 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:22:32.139176 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-28-16-27-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 17:22:32.139185 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.139195 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:22:32.139205 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:22:32.139215 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:22:32.139225 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:22:32.139240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:22:32.139250 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:22:32.139295 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f', 'scsi-SQEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f-part1', 'scsi-SQEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f-part14', 'scsi-SQEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f-part15', 'scsi-SQEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f-part16', 'scsi-SQEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 17:22:32.139339 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': 
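The two tasks around this point follow the same ceph-facts pattern: they loop over every entry of the gathered ansible_devices fact (dict2items turns the fact into key/value loop items, which is why each skipped item echoes a full device dictionary) behind a when: guard, so a host where the guard is false emits one "skipping" line per device. The task below builds the devices list automatically when osd_auto_discovery is enabled; on the control-plane nodes the logged false_condition is inventory_hostname in groups.get(osd_group_name, []). A minimal sketch of that mechanism, assuming typical auto-discovery filters (empty, non-removable, unclaimed disks) rather than the verbatim ceph-ansible role code:

    # Sketch only, not the ceph-ansible source: append each eligible
    # disk from ansible_devices to the `devices` list.
    - name: Set_fact devices generate device list when osd_auto_discovery (sketch)
      ansible.builtin.set_fact:
        devices: "{{ (devices | default([])) + ['/dev/' + item.key] }}"
      loop: "{{ ansible_devices | dict2items }}"
      when:
        - osd_auto_discovery | default(false) | bool
        - inventory_hostname in groups.get(osd_group_name, [])  # guard seen in the log
        - item.value.removable == '0'                           # drop sr0 and other removable media
        - item.value.partitions | length == 0                   # drop the partitioned root disk (sda)
        - item.value.holders | length == 0                      # drop disks already claimed by LVM (sdb/sdc)

Under such a guard an OSD host like testbed-node-3 would keep only its empty 20.00 GB disk (sdd here), while on the control-plane nodes every candidate disk is reported as skipped, which matches the output that follows.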
2025-05-28 17:22:32.139411 | orchestrator |
2025-05-28 17:22:32.139421 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2025-05-28 17:22:32.139431 | orchestrator | Wednesday 28 May 2025 17:11:54 +0000 (0:00:01.683) 0:00:31.884 *********
2025-05-28 17:22:32.139441 | orchestrator | skipping: [testbed-node-0] => (items loop0-loop7, sda, sr0: each previous-task result echoed with skip_reason 'Conditional result was False', false_condition 'inventory_hostname in groups.get(osd_group_name, [])')
2025-05-28 17:22:32.139636 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.139580 | orchestrator | skipping: [testbed-node-1] => (items loop0-loop7, sda, sr0: same skip_reason and false_condition)
2025-05-28 17:22:32.139726 | orchestrator | skipping: [testbed-node-2] => (items loop0-loop5: same skip_reason and false_condition)
2025-05-28 17:22:32.139804 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable':
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.139815 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.139824 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.139835 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b110ae9-3b24-4117-b531-0e276aed65fb', 'scsi-SQEMU_QEMU_HARDDISK_3b110ae9-3b24-4117-b531-0e276aed65fb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b110ae9-3b24-4117-b531-0e276aed65fb-part1', 'scsi-SQEMU_QEMU_HARDDISK_3b110ae9-3b24-4117-b531-0e276aed65fb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b110ae9-3b24-4117-b531-0e276aed65fb-part14', 'scsi-SQEMU_QEMU_HARDDISK_3b110ae9-3b24-4117-b531-0e276aed65fb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b110ae9-3b24-4117-b531-0e276aed65fb-part15', 'scsi-SQEMU_QEMU_HARDDISK_3b110ae9-3b24-4117-b531-0e276aed65fb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b110ae9-3b24-4117-b531-0e276aed65fb-part16', 'scsi-SQEMU_QEMU_HARDDISK_3b110ae9-3b24-4117-b531-0e276aed65fb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.139856 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 
'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-28-16-27-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.139873 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b27f73ed--a290--5ab5--82ba--70ebe910dd97-osd--block--b27f73ed--a290--5ab5--82ba--70ebe910dd97', 'dm-uuid-LVM-9KLnSV2FMdu5smNS3y5wyX3w7ayXNG7y8kFFVylj4M6XQm1D32z3UL9kTpdBpt24'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.139885 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fbdc558b--af0f--50ef--b610--4a3c4fb87cac-osd--block--fbdc558b--af0f--50ef--b610--4a3c4fb87cac', 'dm-uuid-LVM-3OcUXFJdZOjxX4MhVM6COoKVtLABKf07UF6CWmNn0ylHpl2JtM11yyjevZteTWOE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.139904 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.139914 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.139924 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2025-05-28 17:22:32.139935 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.139949 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.139966 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.139977 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.139993 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b5b3f734--7a3a--56eb--b9e1--00e08c7f7e25-osd--block--b5b3f734--7a3a--56eb--b9e1--00e08c7f7e25', 'dm-uuid-LVM-LBOmjHRZzCuxZPOQJodwcdTLf69Ofevmg8e2XHQ3Pwz2n2xPxhpILlxcVPgbAlKk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.140003 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.140013 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7e811d1b--ccc9--571e--beba--983efbae239d-osd--block--7e811d1b--ccc9--571e--beba--983efbae239d', 'dm-uuid-LVM-CAITT3RP6TLMc9HmcMNx0JcxwXriugGpoki7VaPbKtuGl6xe2aNOrqHspFG1X3oT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.140027 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.140044 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.140055 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5', 'scsi-SQEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5-part1', 'scsi-SQEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5-part14', 'scsi-SQEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5-part15', 'scsi-SQEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5-part16', 'scsi-SQEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.140072 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.140083 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--b27f73ed--a290--5ab5--82ba--70ebe910dd97-osd--block--b27f73ed--a290--5ab5--82ba--70ebe910dd97'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-d65QUk-DtJC-JGe9-CIIx-PJTJ-W9E2-iJBFyL', 'scsi-0QEMU_QEMU_HARDDISK_da6420c4-4562-42e6-8445-8de06d590092', 'scsi-SQEMU_QEMU_HARDDISK_da6420c4-4562-42e6-8445-8de06d590092'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.140101 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.140117 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--fbdc558b--af0f--50ef--b610--4a3c4fb87cac-osd--block--fbdc558b--af0f--50ef--b610--4a3c4fb87cac'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Torr0x-o6IT-Uhyq-LPgW-VFfl-CEez-PDbgrh', 'scsi-0QEMU_QEMU_HARDDISK_66780fe2-f30a-4cd5-a925-045679329f08', 'scsi-SQEMU_QEMU_HARDDISK_66780fe2-f30a-4cd5-a925-045679329f08'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.140127 | orchestrator | 2025-05-28 17:22:32 | INFO  | Task 37cafdb3-9b68-47a1-a54a-4713396a7016 is in state STARTED 2025-05-28 17:22:32.140139 | orchestrator | 2025-05-28 17:22:32 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:22:32.140235 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.140257 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_705788e5-cc1d-4d40-94fd-fb0e2f22a483', 'scsi-SQEMU_QEMU_HARDDISK_705788e5-cc1d-4d40-94fd-fb0e2f22a483'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.140272 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.140355 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-28-16-27-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.140374 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--91f15584--1a8a--582b--a00a--c533bea87f37-osd--block--91f15584--1a8a--582b--a00a--c533bea87f37', 'dm-uuid-LVM-SZ7fUzalikI3yYKAExVeTMfqLzlx29glVO0dFKrypnLKwBHEDds3DU1HwME1nrC4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.140557 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.140574 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.140584 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d85522ca--9ab4--5810--aefe--18d74b0f7dbe-osd--block--d85522ca--9ab4--5810--aefe--18d74b0f7dbe', 'dm-uuid-LVM-AzC3Hw2lyZQrpdA8BrMkmXdWsef6cE9NyBcJfcYWpPONb2VHWS4VHXN4HV8cED63'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.140592 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.140606 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.140614 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.140629 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.140692 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c', 'scsi-SQEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c-part1', 'scsi-SQEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c-part14', 'scsi-SQEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c-part15', 'scsi-SQEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c-part16', 'scsi-SQEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.140709 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.140724 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--b5b3f734--7a3a--56eb--b9e1--00e08c7f7e25-osd--block--b5b3f734--7a3a--56eb--b9e1--00e08c7f7e25'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-aH6NYF-XOTJ-BzO5-wlK5-Wg1X-YPyb-SmFGYl', 'scsi-0QEMU_QEMU_HARDDISK_0444fcd6-ace4-41be-a60f-d61a86741ad0', 'scsi-SQEMU_QEMU_HARDDISK_0444fcd6-ace4-41be-a60f-d61a86741ad0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.140733 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.140790 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--7e811d1b--ccc9--571e--beba--983efbae239d-osd--block--7e811d1b--ccc9--571e--beba--983efbae239d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oa4YS1-Oof0-xLLq-Kbqf-lN5t-767L-fbWVLa', 'scsi-0QEMU_QEMU_HARDDISK_d5a98c17-e489-4dc0-a000-f021a8d49d4d', 'scsi-SQEMU_QEMU_HARDDISK_d5a98c17-e489-4dc0-a000-f021a8d49d4d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.140802 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.140814 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3ba669b-02ce-4ac9-8d34-f5b1bbc1f6b4', 'scsi-SQEMU_QEMU_HARDDISK_c3ba669b-02ce-4ac9-8d34-f5b1bbc1f6b4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.140830 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.140838 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-28-16-27-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.140846 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.140854 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.140911 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.140928 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 
'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f', 'scsi-SQEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f-part1', 'scsi-SQEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f-part14', 'scsi-SQEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f-part15', 'scsi-SQEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f-part16', 'scsi-SQEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.140944 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--91f15584--1a8a--582b--a00a--c533bea87f37-osd--block--91f15584--1a8a--582b--a00a--c533bea87f37'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-SgwlIF-cvJP-49vP-C19Y-EBRD-SVc4-jUIiXe', 'scsi-0QEMU_QEMU_HARDDISK_1369a208-db5b-4ff3-8df7-c2f8ed8178e8', 'scsi-SQEMU_QEMU_HARDDISK_1369a208-db5b-4ff3-8df7-c2f8ed8178e8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.141026 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d85522ca--9ab4--5810--aefe--18d74b0f7dbe-osd--block--d85522ca--9ab4--5810--aefe--18d74b0f7dbe'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vCAuSE-MMAw-D5wt-rZoX-iPtq-UgGK-kpJaQz', 'scsi-0QEMU_QEMU_HARDDISK_3045bd6c-b8ff-4958-af32-f9dea72800f3', 'scsi-SQEMU_QEMU_HARDDISK_3045bd6c-b8ff-4958-af32-f9dea72800f3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.141040 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80beb2a7-6ee1-4917-8c3d-de783739f119', 'scsi-SQEMU_QEMU_HARDDISK_80beb2a7-6ee1-4917-8c3d-de783739f119'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.141053 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-28-16-27-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:22:32.141070 | orchestrator | skipping: [testbed-node-5]
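The long run of per-device "skipping" records above is the OSD device scan: a task loops over every entry in each host's gathered device facts (sda, sdb, loop0..loop7, sr0, dm-0/dm-1) under two guards visible in the log. On the control-plane nodes (testbed-node-0/1/2) the reported false_condition is 'inventory_hostname in groups.get(osd_group_name, [])'; on the storage nodes (testbed-node-3/4/5) it is 'osd_auto_discovery | default(False) | bool'. That split is consistent with an AND-ed `when` list evaluated in order, since Ansible reports the first condition that fails. A minimal sketch of such a guarded loop, assuming only the expressions shown in the log; the task name and the `_osd_candidates` fact are hypothetical, not taken from the ceph-ansible source:

    - name: Collect candidate OSD devices (illustrative sketch)
      ansible.builtin.set_fact:
        # _osd_candidates is a hypothetical fact used only to show the guard;
        # item.key is the device name, item.value the full facts dict echoed above
        _osd_candidates: "{{ _osd_candidates | default([]) + [item.key] }}"
      loop: "{{ ansible_facts.devices | dict2items }}"
      loop_control:
        label: "{{ item.key }}"
      when:
        - inventory_hostname in groups.get(osd_group_name, [])
        - osd_auto_discovery | default(False) | bool

Because each loop item carries the complete device dictionary (holders, links, partitions, and so on), every skipped item prints its whole 'value' block, which is what makes this stretch of the log so verbose.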
2025-05-28 17:22:32.141092 | orchestrator | 2025-05-28 17:22:32.141101 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-05-28 17:22:32.141110 | orchestrator | Wednesday 28 May 2025 17:11:57 +0000 (0:00:02.654) 0:00:34.538 ********* 2025-05-28 17:22:32.141119 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.141128 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.141137 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.141145 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.141154 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.141162 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.141170 | orchestrator | 2025-05-28 17:22:32.141179 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-05-28 17:22:32.141187 | orchestrator | Wednesday 28 May 2025 17:11:58 +0000 (0:00:01.081) 0:00:35.620 ********* 2025-05-28 17:22:32.141196 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.141204 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.141212 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.141220 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.141228 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.141237 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.141245 | orchestrator | 2025-05-28 17:22:32.141253 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-05-28 17:22:32.141262 | orchestrator | Wednesday 28 May 2025 17:11:59 +0000 (0:00:00.536) 0:00:36.156 ********* 2025-05-28 17:22:32.141270 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.141301 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.141310 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.141318 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.141326 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.141333 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.141341 | orchestrator | 2025-05-28 17:22:32.141349 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-05-28 17:22:32.141356 | orchestrator | Wednesday 28 May 2025 17:12:00 +0000 (0:00:01.312) 0:00:37.469 ********* 2025-05-28 17:22:32.141364 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.141372 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.141380 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.141387 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.141405 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.141413 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.141421 | orchestrator | 2025-05-28 17:22:32.141428 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-05-28 17:22:32.141437 | orchestrator | Wednesday 28 May 2025 17:12:00 +0000 (0:00:00.564) 0:00:38.034 ********* 2025-05-28 17:22:32.141444 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.141452 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.141460 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.141468 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.141475 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.141483 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.141491 | orchestrator | 2025-05-28 17:22:32.141522 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-05-28 17:22:32.141531 | orchestrator | Wednesday 28 May 2025 17:12:01 +0000 (0:00:00.688) 0:00:38.722 ********* 2025-05-28 17:22:32.141539 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.141553 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.141560 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.141568 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.141576 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.141583 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.141591 | orchestrator | 2025-05-28 17:22:32.141599 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-05-28 17:22:32.141607 | orchestrator | Wednesday 28 May 2025 17:12:02 +0000 (0:00:00.874) 0:00:39.597 ********* 2025-05-28 17:22:32.141615 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-28 17:22:32.141623 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-05-28 17:22:32.141630 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-05-28 17:22:32.141638 | orchestrator | ok: [testbed-node-2] => 
(item=testbed-node-0) 2025-05-28 17:22:32.141646 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-05-28 17:22:32.141653 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-05-28 17:22:32.141661 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-05-28 17:22:32.141669 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-28 17:22:32.141676 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-05-28 17:22:32.141684 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-05-28 17:22:32.141692 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-05-28 17:22:32.141699 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-05-28 17:22:32.141707 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-05-28 17:22:32.141714 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-05-28 17:22:32.141722 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-05-28 17:22:32.141730 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-28 17:22:32.141737 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-05-28 17:22:32.141745 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-05-28 17:22:32.141753 | orchestrator | 2025-05-28 17:22:32.141768 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-05-28 17:22:32.141776 | orchestrator | Wednesday 28 May 2025 17:12:05 +0000 (0:00:03.037) 0:00:42.634 ********* 2025-05-28 17:22:32.141784 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-28 17:22:32.141792 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-28 17:22:32.141800 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-28 17:22:32.141808 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.141815 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-28 17:22:32.141823 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-28 17:22:32.141831 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-28 17:22:32.141838 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.141846 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-28 17:22:32.141853 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-28 17:22:32.141861 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-28 17:22:32.141869 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.141877 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-28 17:22:32.141884 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-28 17:22:32.141892 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-28 17:22:32.141900 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.141907 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-28 17:22:32.141915 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-28 17:22:32.141923 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-28 17:22:32.141930 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.141943 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-28 17:22:32.141951 | orchestrator | skipping: [testbed-node-5] => 
(item=testbed-node-1)  2025-05-28 17:22:32.141959 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-28 17:22:32.141966 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.141974 | orchestrator | 2025-05-28 17:22:32.141982 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-05-28 17:22:32.141990 | orchestrator | Wednesday 28 May 2025 17:12:06 +0000 (0:00:00.623) 0:00:43.257 ********* 2025-05-28 17:22:32.141998 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.142005 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.142013 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.142060 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:22:32.142069 | orchestrator | 2025-05-28 17:22:32.142077 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-28 17:22:32.142085 | orchestrator | Wednesday 28 May 2025 17:12:06 +0000 (0:00:00.848) 0:00:44.106 ********* 2025-05-28 17:22:32.142093 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.142101 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.142109 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.142116 | orchestrator | 2025-05-28 17:22:32.142124 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-28 17:22:32.142132 | orchestrator | Wednesday 28 May 2025 17:12:07 +0000 (0:00:00.335) 0:00:44.441 ********* 2025-05-28 17:22:32.142140 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.142148 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.142176 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.142185 | orchestrator | 2025-05-28 17:22:32.142193 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-28 17:22:32.142201 | orchestrator | Wednesday 28 May 2025 17:12:07 +0000 (0:00:00.482) 0:00:44.923 ********* 2025-05-28 17:22:32.142209 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.142217 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.142225 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.142233 | orchestrator | 2025-05-28 17:22:32.142241 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-05-28 17:22:32.142249 | orchestrator | Wednesday 28 May 2025 17:12:08 +0000 (0:00:00.349) 0:00:45.273 ********* 2025-05-28 17:22:32.142256 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.142264 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.142272 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.142300 | orchestrator | 2025-05-28 17:22:32.142313 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-05-28 17:22:32.142327 | orchestrator | Wednesday 28 May 2025 17:12:08 +0000 (0:00:00.364) 0:00:45.637 ********* 2025-05-28 17:22:32.142340 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-28 17:22:32.142352 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-28 17:22:32.142360 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-28 17:22:32.142367 | orchestrator | skipping: [testbed-node-3]
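The set_radosgw_address.yml block above resolves the RGW bind address through mutually exclusive paths: the 'radosgw_address_block' ipv4/ipv6 variants are skipped, 'Set_fact _radosgw_address to radosgw_address' returns ok on the three gateway hosts (testbed-node-3/4/5), and the interface-based variants that follow are skipped as well. A minimal sketch of that kind of fallback chain, reusing the variable names from the log; the sentinel defaults in the `when` clauses are assumptions for illustration, not values read from this run or from the ceph-ansible source:

    - name: Set_fact _radosgw_address to radosgw_address (illustrative sketch)
      ansible.builtin.set_fact:
        _radosgw_address: "{{ radosgw_address }}"
      when:
        # assumed sentinels: take this path only when no address block is
        # configured and a concrete radosgw_address has been set
        - radosgw_address_block | default('subnet') == 'subnet'
        - radosgw_address | default('x.x.x.x') != 'x.x.x.x'

The 'Reset rgw_instances (workaround)' and 'Set_fact rgw_instances' tasks just below then build one instance entry per gateway host (item=0) from the resolved address.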
2025-05-28 17:22:32.142375 | orchestrator |
2025-05-28 17:22:32.142383 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-28 17:22:32.142391 | orchestrator | Wednesday 28 May 2025 17:12:08 +0000 (0:00:00.368) 0:00:46.006 *********
2025-05-28 17:22:32.142399 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-28 17:22:32.142406 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-28 17:22:32.142414 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-28 17:22:32.142422 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:22:32.142429 | orchestrator |
2025-05-28 17:22:32.142437 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-28 17:22:32.142453 | orchestrator | Wednesday 28 May 2025 17:12:09 +0000 (0:00:00.446) 0:00:46.452 *********
2025-05-28 17:22:32.142461 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-28 17:22:32.142473 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-28 17:22:32.142481 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-28 17:22:32.142489 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:22:32.142496 | orchestrator |
2025-05-28 17:22:32.142504 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-05-28 17:22:32.142512 | orchestrator | Wednesday 28 May 2025 17:12:10 +0000 (0:00:01.006) 0:00:47.459 *********
2025-05-28 17:22:32.142520 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:22:32.142527 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:22:32.142535 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:22:32.142543 | orchestrator |
2025-05-28 17:22:32.142551 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-05-28 17:22:32.142558 | orchestrator | Wednesday 28 May 2025 17:12:11 +0000 (0:00:01.326) 0:00:48.785 *********
2025-05-28 17:22:32.142566 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-05-28 17:22:32.142574 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-05-28 17:22:32.142582 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-05-28 17:22:32.142589 | orchestrator |
2025-05-28 17:22:32.142597 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-05-28 17:22:32.142605 | orchestrator | Wednesday 28 May 2025 17:12:12 +0000 (0:00:00.807) 0:00:49.593 *********
2025-05-28 17:22:32.142612 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-28 17:22:32.142620 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-28 17:22:32.142628 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-28 17:22:32.142636 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-05-28 17:22:32.142644 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-28 17:22:32.142652 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-28 17:22:32.142659 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-05-28 17:22:32.142667 | orchestrator |
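
ceph_run_cmd, set once per host above (delegated from testbed-node-0 to every node including testbed-manager), is the wrapper that later ceph CLI calls go through. In a containerized deployment it resolves to a container invocation rather than the bare binary. A sketch under that assumption (registry/image/tag variables are placeholders, not confirmed values from this job):

  - name: Set_fact ceph_run_cmd (sketch)
    ansible.builtin.set_fact:
      ceph_run_cmd: >-
        {{ container_binary ~ ' run --rm --net=host
           -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z
           -v /var/log/ceph/:/var/log/ceph/:z --entrypoint=ceph '
           ~ ceph_docker_registry ~ '/' ~ ceph_docker_image ~ ':' ~ ceph_docker_image_tag
           if containerized_deployment | bool else 'ceph' }}
    delegate_to: "{{ item }}"
    delegate_facts: true
    run_once: true
    loop: "{{ ansible_play_hosts_all }}"

ceph_admin_command, set next, follows the same pattern with the admin keyring mounted in addition.
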
2025-05-28 17:22:32.142675 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-05-28 17:22:32.142683 | orchestrator | Wednesday 28 May 2025 17:12:13 +0000 (0:00:00.784) 0:00:50.377 *********
2025-05-28 17:22:32.142690 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-28 17:22:32.142698 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-28 17:22:32.142706 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-28 17:22:32.142713 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-05-28 17:22:32.142721 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-28 17:22:32.142729 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-28 17:22:32.142736 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-05-28 17:22:32.142744 | orchestrator |
2025-05-28 17:22:32.142752 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-05-28 17:22:32.142760 | orchestrator | Wednesday 28 May 2025 17:12:15 +0000 (0:00:02.650) 0:00:53.027 *********
2025-05-28 17:22:32.142790 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-28 17:22:32.142799 | orchestrator |
2025-05-28 17:22:32.142807 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-05-28 17:22:32.142821 | orchestrator | Wednesday 28 May 2025 17:12:17 +0000 (0:00:01.230) 0:00:54.258 *********
2025-05-28 17:22:32.142830 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-28 17:22:32.142837 | orchestrator |
2025-05-28 17:22:32.142845 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-05-28 17:22:32.142853 | orchestrator | Wednesday 28 May 2025 17:12:18 +0000 (0:00:01.093) 0:00:55.351 *********
2025-05-28 17:22:32.142861 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:22:32.142869 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:22:32.142876 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:22:32.142884 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:22:32.142892 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:22:32.142899 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:22:32.142907 | orchestrator |
2025-05-28 17:22:32.142915 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-05-28 17:22:32.142922 | orchestrator | Wednesday 28 May 2025 17:12:19 +0000 (0:00:00.765) 0:00:56.117 *********
2025-05-28 17:22:32.142930 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.142938 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.142945 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.142953 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:22:32.142961 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:22:32.142969 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:22:32.142976 | orchestrator |
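
The mon/osd checks above (and the mds/rgw/mgr ones that follow) only query container state and never fail the play. A sketch of the pattern, assuming a podman/docker binary behind container_binary:

  - name: Check for a mon container (sketch)
    ansible.builtin.command: >-
      {{ container_binary }} ps -q --filter name=ceph-mon-{{ ansible_facts['hostname'] }}
    register: ceph_mon_container_stat
    changed_when: false
    failed_when: false
    check_mode: false
    when: inventory_hostname in groups['mons']

Each check is conditioned on group membership, which is why every task shows three ok hosts (the role's own group) and three skipping hosts.
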
2025-05-28 17:22:32.142984 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-05-28 17:22:32.142992 | orchestrator | Wednesday 28 May 2025 17:12:20 +0000 (0:00:01.290) 0:00:57.407 *********
2025-05-28 17:22:32.142999 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.143007 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.143015 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.143023 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:22:32.143030 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:22:32.143038 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:22:32.143046 | orchestrator |
2025-05-28 17:22:32.143058 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-05-28 17:22:32.143066 | orchestrator | Wednesday 28 May 2025 17:12:21 +0000 (0:00:01.379) 0:00:58.786 *********
2025-05-28 17:22:32.143074 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.143081 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.143089 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.143097 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:22:32.143104 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:22:32.143112 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:22:32.143120 | orchestrator |
2025-05-28 17:22:32.143128 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-05-28 17:22:32.143136 | orchestrator | Wednesday 28 May 2025 17:12:22 +0000 (0:00:01.268) 0:01:00.055 *********
2025-05-28 17:22:32.143143 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:22:32.143151 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:22:32.143159 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:22:32.143166 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:22:32.143174 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:22:32.143182 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:22:32.143190 | orchestrator |
2025-05-28 17:22:32.143198 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-05-28 17:22:32.143205 | orchestrator | Wednesday 28 May 2025 17:12:24 +0000 (0:00:01.550) 0:01:01.606 *********
2025-05-28 17:22:32.143213 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.143221 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.143229 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.143236 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:22:32.143244 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:22:32.143257 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:22:32.143264 | orchestrator |
2025-05-28 17:22:32.143272 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-05-28 17:22:32.143328 | orchestrator | Wednesday 28 May 2025 17:12:25 +0000 (0:00:00.874) 0:01:02.481 *********
2025-05-28 17:22:32.143336 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.143344 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.143352 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.143359 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:22:32.143367 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:22:32.143373 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:22:32.143380 | orchestrator |
2025-05-28 17:22:32.143387 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-05-28 17:22:32.143393 | orchestrator | Wednesday 28 May 2025 17:12:26 +0000 (0:00:01.108) 0:01:03.589 *********
2025-05-28 17:22:32.143400 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:22:32.143406 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:22:32.143413 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:22:32.143419 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:22:32.143426 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:22:32.143432 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:22:32.143439 | orchestrator |
2025-05-28 17:22:32.143445 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-05-28 17:22:32.143452 | orchestrator | Wednesday 28 May 2025 17:12:27 +0000 (0:00:01.391) 0:01:04.980 *********
2025-05-28 17:22:32.143459 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:22:32.143465 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:22:32.143471 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:22:32.143478 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:22:32.143484 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:22:32.143491 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:22:32.143497 | orchestrator |
2025-05-28 17:22:32.143504 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-05-28 17:22:32.143510 | orchestrator | Wednesday 28 May 2025 17:12:29 +0000 (0:00:01.727) 0:01:06.708 *********
2025-05-28 17:22:32.143517 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.143524 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.143530 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.143537 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:22:32.143561 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:22:32.143569 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:22:32.143576 | orchestrator |
2025-05-28 17:22:32.143582 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-05-28 17:22:32.143589 | orchestrator | Wednesday 28 May 2025 17:12:30 +0000 (0:00:00.701) 0:01:07.410 *********
2025-05-28 17:22:32.143596 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:22:32.143602 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:22:32.143609 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:22:32.143615 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:22:32.143622 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:22:32.143628 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:22:32.143635 | orchestrator |
2025-05-28 17:22:32.143642 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-05-28 17:22:32.143648 | orchestrator | Wednesday 28 May 2025 17:12:31 +0000 (0:00:01.150) 0:01:08.561 *********
2025-05-28 17:22:32.143655 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.143661 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.143668 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.143674 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:22:32.143681 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:22:32.143687 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:22:32.143694 | orchestrator |
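
The handler_*_status facts set in this stretch distill the container checks into booleans that the ceph-handler role later uses to decide whether a daemon restart is warranted. A sketch of the mon variant (field names follow the check sketched earlier and are assumptions):

  - name: Set_fact handler_mon_status (sketch)
    ansible.builtin.set_fact:
      handler_mon_status: "{{ (ceph_mon_container_stat.stdout_lines | default([])) | length > 0 }}"
    when: inventory_hostname in groups['mons']
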
2025-05-28 17:22:32.143700 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-05-28 17:22:32.143707 | orchestrator | Wednesday 28 May 2025 17:12:32 +0000 (0:00:00.791) 0:01:09.352 *********
2025-05-28 17:22:32.143719 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.143726 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.143732 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.143739 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:22:32.143745 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:22:32.143752 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:22:32.143758 | orchestrator |
2025-05-28 17:22:32.143765 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-05-28 17:22:32.143771 | orchestrator | Wednesday 28 May 2025 17:12:33 +0000 (0:00:00.899) 0:01:10.252 *********
2025-05-28 17:22:32.143778 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.143785 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.143791 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.143798 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:22:32.143804 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:22:32.143811 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:22:32.143817 | orchestrator |
2025-05-28 17:22:32.143829 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-05-28 17:22:32.143836 | orchestrator | Wednesday 28 May 2025 17:12:33 +0000 (0:00:00.712) 0:01:10.964 *********
2025-05-28 17:22:32.143843 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.143849 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.143856 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.143862 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:22:32.143869 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:22:32.143875 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:22:32.143882 | orchestrator |
2025-05-28 17:22:32.143888 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-05-28 17:22:32.143895 | orchestrator | Wednesday 28 May 2025 17:12:34 +0000 (0:00:01.052) 0:01:12.016 *********
2025-05-28 17:22:32.143901 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.143908 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.143914 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.143921 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:22:32.143928 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:22:32.143934 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:22:32.143941 | orchestrator |
2025-05-28 17:22:32.143947 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-05-28 17:22:32.143954 | orchestrator | Wednesday 28 May 2025 17:12:35 +0000 (0:00:00.679) 0:01:12.696 *********
2025-05-28 17:22:32.143960 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:22:32.143967 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:22:32.143973 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:22:32.143980 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:22:32.143986 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:22:32.143993 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:22:32.143999 | orchestrator |
2025-05-28 17:22:32.144006 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-05-28 17:22:32.144012 | orchestrator | Wednesday 28 May 2025 17:12:36 +0000 (0:00:00.925) 0:01:13.622 *********
2025-05-28 17:22:32.144019 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:22:32.144025 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:22:32.144032 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:22:32.144038 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:22:32.144044 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:22:32.144051 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:22:32.144057 | orchestrator |
2025-05-28 17:22:32.144064 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-05-28 17:22:32.144071 | orchestrator | Wednesday 28 May 2025 17:12:37 +0000 (0:00:00.659) 0:01:14.281 *********
2025-05-28 17:22:32.144077 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:22:32.144084 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:22:32.144090 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:22:32.144097 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:22:32.144103 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:22:32.144114 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:22:32.144121 | orchestrator |
2025-05-28 17:22:32.144128 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2025-05-28 17:22:32.144134 | orchestrator | Wednesday 28 May 2025 17:12:38 +0000 (0:00:01.162) 0:01:15.444 *********
2025-05-28 17:22:32.144141 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:22:32.144147 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:22:32.144154 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:22:32.144161 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:22:32.144167 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:22:32.144173 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:22:32.144180 | orchestrator |
2025-05-28 17:22:32.144187 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2025-05-28 17:22:32.144193 | orchestrator | Wednesday 28 May 2025 17:12:40 +0000 (0:00:01.677) 0:01:17.121 *********
2025-05-28 17:22:32.144200 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:22:32.144206 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:22:32.144213 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:22:32.144235 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:22:32.144242 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:22:32.144249 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:22:32.144255 | orchestrator |
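
ceph.target, generated and enabled above, is a plain systemd target that the per-daemon container units are wired into, so the whole Ceph stack on a node can be started or stopped as one unit. Plausible content of the generated file (an assumption; the template in the role may differ in wording):

  [Unit]
  Description=ceph target allowing to start/stop all ceph-*@.service instances at once

  [Install]
  WantedBy=multi-user.target

With that in place, systemctl stop ceph.target takes down every Ceph unit that hooks into it (e.g. via WantedBy=ceph.target).
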
2025-05-28 17:22:32.144262 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2025-05-28 17:22:32.144268 | orchestrator | Wednesday 28 May 2025 17:12:41 +0000 (0:00:01.975) 0:01:19.096 *********
2025-05-28 17:22:32.144291 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-28 17:22:32.144300 | orchestrator |
2025-05-28 17:22:32.144307 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2025-05-28 17:22:32.144313 | orchestrator | Wednesday 28 May 2025 17:12:43 +0000 (0:00:01.142) 0:01:20.239 *********
2025-05-28 17:22:32.144320 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.144326 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.144333 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.144339 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:22:32.144346 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:22:32.144352 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:22:32.144359 | orchestrator |
2025-05-28 17:22:32.144365 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2025-05-28 17:22:32.144372 | orchestrator | Wednesday 28 May 2025 17:12:43 +0000 (0:00:00.774) 0:01:21.013 *********
2025-05-28 17:22:32.144379 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.144385 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.144391 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.144398 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:22:32.144404 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:22:32.144411 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:22:32.144417 | orchestrator |
2025-05-28 17:22:32.144424 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2025-05-28 17:22:32.144430 | orchestrator | Wednesday 28 May 2025 17:12:44 +0000 (0:00:00.530) 0:01:21.543 *********
2025-05-28 17:22:32.144437 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-28 17:22:32.144444 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-28 17:22:32.144454 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-28 17:22:32.144461 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-28 17:22:32.144467 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-28 17:22:32.144474 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-28 17:22:32.144486 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-28 17:22:32.144493 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-28 17:22:32.144499 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-28 17:22:32.144506 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-28 17:22:32.144512 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-28 17:22:32.144519 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-28 17:22:32.144525 | orchestrator |
2025-05-28 17:22:32.144532 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2025-05-28 17:22:32.144538 | orchestrator | Wednesday 28 May 2025 17:12:46 +0000 (0:00:01.700) 0:01:23.243 *********
2025-05-28 17:22:32.144545 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:22:32.144551 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:22:32.144558 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:22:32.144564 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:22:32.144571 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:22:32.144577 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:22:32.144584 | orchestrator |
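
/run/ceph lives on a tmpfs and disappears at reboot; the tmpfiles.d entry installed above recreates it at boot. A plausible one-line payload for a file such as /etc/tmpfiles.d/ceph-common.conf (mode and ownership are assumptions and may differ in the actual role):

  d /run/ceph 0770 root root -
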
2025-05-28 17:22:32.144590 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2025-05-28 17:22:32.144597 | orchestrator | Wednesday 28 May 2025 17:12:47 +0000 (0:00:01.050) 0:01:24.294 *********
2025-05-28 17:22:32.144603 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.144610 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.144616 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.144623 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:22:32.144629 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:22:32.144636 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:22:32.144642 | orchestrator |
2025-05-28 17:22:32.144649 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2025-05-28 17:22:32.144655 | orchestrator | Wednesday 28 May 2025 17:12:47 +0000 (0:00:00.800) 0:01:25.095 *********
2025-05-28 17:22:32.144661 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.144668 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.144674 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.144681 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:22:32.144687 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:22:32.144693 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:22:32.144700 | orchestrator |
2025-05-28 17:22:32.144707 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2025-05-28 17:22:32.144713 | orchestrator | Wednesday 28 May 2025 17:12:48 +0000 (0:00:00.635) 0:01:25.730 *********
2025-05-28 17:22:32.144720 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.144726 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.144733 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.144739 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:22:32.144745 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:22:32.144752 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:22:32.144758 | orchestrator |
2025-05-28 17:22:32.144765 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2025-05-28 17:22:32.144788 | orchestrator | Wednesday 28 May 2025 17:12:49 +0000 (0:00:00.829) 0:01:26.559 *********
2025-05-28 17:22:32.144797 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-28 17:22:32.144803 | orchestrator |
2025-05-28 17:22:32.144810 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2025-05-28 17:22:32.144817 | orchestrator | Wednesday 28 May 2025 17:12:50 +0000 (0:00:01.152) 0:01:27.712 *********
2025-05-28 17:22:32.144823 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:22:32.144835 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:22:32.144842 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:22:32.144848 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:22:32.144855 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:22:32.144862 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:22:32.144869 | orchestrator |
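
The 1 minute 9 seconds spent in "Pulling Ceph container image" (visible in the next task header's elapsed field) is the one genuinely network-bound step here: the Ceph image is fetched once per node. A retrying pull in the same spirit (a sketch; the exact retry parameters used by the role are assumptions):

  - name: Pulling Ceph container image (sketch)
    ansible.builtin.command: >-
      {{ container_binary }} pull {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}
    register: docker_image
    changed_when: false
    until: docker_image.rc == 0
    retries: 3
    delay: 10
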
2025-05-28 17:22:32.144875 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2025-05-28 17:22:32.144882 | orchestrator | Wednesday 28 May 2025 17:14:00 +0000 (0:01:09.548) 0:02:37.261 *********
2025-05-28 17:22:32.144889 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-05-28 17:22:32.144895 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2025-05-28 17:22:32.144902 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2025-05-28 17:22:32.144908 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.144915 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-05-28 17:22:32.144922 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2025-05-28 17:22:32.144928 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2025-05-28 17:22:32.144935 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.144941 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-05-28 17:22:32.144948 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2025-05-28 17:22:32.144954 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2025-05-28 17:22:32.144964 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.144971 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-05-28 17:22:32.144978 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2025-05-28 17:22:32.144985 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2025-05-28 17:22:32.144991 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:22:32.144998 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-05-28 17:22:32.145004 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2025-05-28 17:22:32.145011 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2025-05-28 17:22:32.145018 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:22:32.145024 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-05-28 17:22:32.145031 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2025-05-28 17:22:32.145037 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2025-05-28 17:22:32.145044 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:22:32.145050 | orchestrator |
2025-05-28 17:22:32.145057 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2025-05-28 17:22:32.145064 | orchestrator | Wednesday 28 May 2025 17:14:01 +0000 (0:00:00.917) 0:02:38.179 *********
2025-05-28 17:22:32.145070 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.145077 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.145083 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.145090 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:22:32.145096 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:22:32.145103 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:22:32.145109 | orchestrator |
2025-05-28 17:22:32.145116 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2025-05-28 17:22:32.145122 | orchestrator | Wednesday 28 May 2025 17:14:01 +0000 (0:00:00.649) 0:02:38.828 *********
2025-05-28 17:22:32.145129 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.145135 | orchestrator |
2025-05-28 17:22:32.145142 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2025-05-28 17:22:32.145154 | orchestrator | Wednesday 28 May 2025 17:14:01 +0000 (0:00:00.133) 0:02:38.962 *********
2025-05-28 17:22:32.145160 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.145167 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.145174 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.145180 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:22:32.145187 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:22:32.145193 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:22:32.145199 | orchestrator |
2025-05-28 17:22:32.145206 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2025-05-28 17:22:32.145213 | orchestrator | Wednesday 28 May 2025 17:14:02 +0000 (0:00:01.138) 0:02:40.101 *********
2025-05-28 17:22:32.145219 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.145225 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.145232 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.145238 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:22:32.145245 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:22:32.145251 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:22:32.145258 | orchestrator |
2025-05-28 17:22:32.145264 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2025-05-28 17:22:32.145271 | orchestrator | Wednesday 28 May 2025 17:14:03 +0000 (0:00:00.796) 0:02:40.897 *********
2025-05-28 17:22:32.145313 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.145321 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.145346 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.145353 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:22:32.145359 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:22:32.145365 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:22:32.145371 | orchestrator |
2025-05-28 17:22:32.145377 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2025-05-28 17:22:32.145384 | orchestrator | Wednesday 28 May 2025 17:14:04 +0000 (0:00:00.947) 0:02:41.845 *********
2025-05-28 17:22:32.145390 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:22:32.145396 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:22:32.145402 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:22:32.145408 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:22:32.145415 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:22:32.145421 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:22:32.145427 | orchestrator |
2025-05-28 17:22:32.145433 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2025-05-28 17:22:32.145440 | orchestrator | Wednesday 28 May 2025 17:14:06 +0000 (0:00:02.065) 0:02:43.911 *********
2025-05-28 17:22:32.145446 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:22:32.145452 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:22:32.145458 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:22:32.145464 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:22:32.145470 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:22:32.145476 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:22:32.145483 | orchestrator |
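
"Get ceph version" runs the image's ceph binary with --version, and the set_fact that follows keeps only the version number from output of the shape "ceph version 18.2.x (...) reef (stable)" (version string here is illustrative). A sketch of that pair:

  - name: Get ceph version (sketch)
    ansible.builtin.command: >-
      {{ container_binary }} run --rm --entrypoint /usr/bin/ceph
      {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }} --version
    register: ceph_version_out
    changed_when: false

  - name: Set_fact ceph_version ceph_version.stdout.split (sketch)
    ansible.builtin.set_fact:
      ceph_version: "{{ ceph_version_out.stdout.split(' ')[2] }}"
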
2025-05-28 17:22:32.145489 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2025-05-28 17:22:32.145495 | orchestrator | Wednesday 28 May 2025 17:14:07 +0000 (0:00:00.734) 0:02:44.645 *********
2025-05-28 17:22:32.145501 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-28 17:22:32.145509 | orchestrator |
2025-05-28 17:22:32.145515 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2025-05-28 17:22:32.145521 | orchestrator | Wednesday 28 May 2025 17:14:08 +0000 (0:00:00.958) 0:02:45.604 *********
2025-05-28 17:22:32.145528 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.145534 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.145540 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.145546 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:22:32.145552 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:22:32.145565 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:22:32.145576 | orchestrator |
2025-05-28 17:22:32.145582 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2025-05-28 17:22:32.145588 | orchestrator | Wednesday 28 May 2025 17:14:09 +0000 (0:00:00.560) 0:02:46.164 *********
2025-05-28 17:22:32.145594 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.145600 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.145607 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.145613 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:22:32.145619 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:22:32.145625 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:22:32.145631 | orchestrator |
2025-05-28 17:22:32.145637 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2025-05-28 17:22:32.145644 | orchestrator | Wednesday 28 May 2025 17:14:09 +0000 (0:00:00.812) 0:02:46.976 *********
2025-05-28 17:22:32.145650 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.145656 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.145662 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.145668 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:22:32.145674 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:22:32.145680 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:22:32.145686 | orchestrator |
2025-05-28 17:22:32.145693 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2025-05-28 17:22:32.145699 | orchestrator | Wednesday 28 May 2025 17:14:10 +0000 (0:00:00.602) 0:02:47.579 *********
2025-05-28 17:22:32.145705 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.145711 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.145717 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.145723 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:22:32.145729 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:22:32.145735 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:22:32.145742 | orchestrator |
2025-05-28 17:22:32.145748 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2025-05-28 17:22:32.145754 | orchestrator | Wednesday 28 May 2025 17:14:11 +0000 (0:00:00.704) 0:02:48.283 *********
2025-05-28 17:22:32.145760 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.145766 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.145772 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.145778 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:22:32.145785 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:22:32.145791 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:22:32.145797 | orchestrator |
2025-05-28 17:22:32.145803 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2025-05-28 17:22:32.145809 | orchestrator | Wednesday 28 May 2025 17:14:11 +0000 (0:00:00.556) 0:02:48.839 *********
2025-05-28 17:22:32.145815 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.145821 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.145827 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.145834 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:22:32.145840 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:22:32.145846 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:22:32.145852 | orchestrator |
2025-05-28 17:22:32.145858 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2025-05-28 17:22:32.145864 | orchestrator | Wednesday 28 May 2025 17:14:12 +0000 (0:00:00.764) 0:02:49.604 *********
2025-05-28 17:22:32.145870 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.145877 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.145883 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.145889 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:22:32.145895 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:22:32.145901 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:22:32.145907 | orchestrator |
2025-05-28 17:22:32.145913 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2025-05-28 17:22:32.145938 | orchestrator | Wednesday 28 May 2025 17:14:13 +0000 (0:00:00.587) 0:02:50.191 *********
2025-05-28 17:22:32.145945 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.145951 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.145957 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.145963 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:22:32.145970 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:22:32.145976 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:22:32.145982 | orchestrator |
2025-05-28 17:22:32.145988 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2025-05-28 17:22:32.145994 | orchestrator | Wednesday 28 May 2025 17:14:13 +0000 (0:00:00.759) 0:02:50.950 *********
2025-05-28 17:22:32.146000 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:22:32.146007 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:22:32.146013 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:22:32.146039 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:22:32.146045 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:22:32.146052 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:22:32.146058 | orchestrator |
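
The jewel-through-reef chain above is a release lookup: each task matches one major version and only the matching one sets ceph_release, so exactly one ok per host is expected. Since only the reef task returned ok, the image carries a Ceph 18 release. A sketch of the matching task:

  - name: Set_fact ceph_release reef (sketch)
    ansible.builtin.set_fact:
      ceph_release: reef
    when: ceph_version.split('.')[0] is version('18', '==')
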
2025-05-28 17:22:32.146064 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2025-05-28 17:22:32.146070 | orchestrator | Wednesday 28 May 2025 17:14:14 +0000 (0:00:01.095) 0:02:52.045 *********
2025-05-28 17:22:32.146077 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-28 17:22:32.146083 | orchestrator |
2025-05-28 17:22:32.146089 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2025-05-28 17:22:32.146096 | orchestrator | Wednesday 28 May 2025 17:14:16 +0000 (0:00:01.090) 0:02:53.136 *********
2025-05-28 17:22:32.146102 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2025-05-28 17:22:32.146108 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2025-05-28 17:22:32.146115 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2025-05-28 17:22:32.146121 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2025-05-28 17:22:32.146127 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2025-05-28 17:22:32.146133 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2025-05-28 17:22:32.146139 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2025-05-28 17:22:32.146145 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2025-05-28 17:22:32.146155 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2025-05-28 17:22:32.146161 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2025-05-28 17:22:32.146168 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2025-05-28 17:22:32.146174 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2025-05-28 17:22:32.146180 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2025-05-28 17:22:32.146186 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2025-05-28 17:22:32.146192 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2025-05-28 17:22:32.146199 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2025-05-28 17:22:32.146205 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2025-05-28 17:22:32.146211 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2025-05-28 17:22:32.146217 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2025-05-28 17:22:32.146223 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2025-05-28 17:22:32.146229 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2025-05-28 17:22:32.146236 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2025-05-28 17:22:32.146242 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2025-05-28 17:22:32.146248 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2025-05-28 17:22:32.146254 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2025-05-28 17:22:32.146268 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2025-05-28 17:22:32.146274 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2025-05-28 17:22:32.146300 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2025-05-28 17:22:32.146306 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2025-05-28 17:22:32.146312 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2025-05-28 17:22:32.146318 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2025-05-28 17:22:32.146324 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2025-05-28 17:22:32.146330 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2025-05-28 17:22:32.146336 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2025-05-28 17:22:32.146342 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2025-05-28 17:22:32.146348 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2025-05-28 17:22:32.146354 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2025-05-28 17:22:32.146360 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2025-05-28 17:22:32.146366 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2025-05-28 17:22:32.146372 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2025-05-28 17:22:32.146378 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2025-05-28 17:22:32.146384 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2025-05-28 17:22:32.146390 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2025-05-28 17:22:32.146396 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2025-05-28 17:22:32.146402 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2025-05-28 17:22:32.146409 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2025-05-28 17:22:32.146432 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2025-05-28 17:22:32.146439 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2025-05-28 17:22:32.146445 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2025-05-28 17:22:32.146451 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2025-05-28 17:22:32.146457 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2025-05-28 17:22:32.146463 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2025-05-28 17:22:32.146470 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2025-05-28 17:22:32.146476 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2025-05-28 17:22:32.146482 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2025-05-28 17:22:32.146488 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2025-05-28 17:22:32.146494 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2025-05-28 17:22:32.146500 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2025-05-28 17:22:32.146506 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2025-05-28 17:22:32.146512 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2025-05-28 17:22:32.146518 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2025-05-28 17:22:32.146524 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2025-05-28 17:22:32.146530 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2025-05-28 17:22:32.146536 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2025-05-28 17:22:32.146542 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2025-05-28 17:22:32.146548 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2025-05-28 17:22:32.146559 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2025-05-28 17:22:32.146569 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2025-05-28 17:22:32.146575 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2025-05-28 17:22:32.146581 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2025-05-28 17:22:32.146587 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2025-05-28 17:22:32.146593 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2025-05-28 17:22:32.146599 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2025-05-28 17:22:32.146605 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2025-05-28 17:22:32.146611 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2025-05-28 17:22:32.146617 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2025-05-28 17:22:32.146623 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2025-05-28 17:22:32.146629 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2025-05-28 17:22:32.146635 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-05-28 17:22:32.146641 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-05-28 17:22:32.146647 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-05-28 17:22:32.146653 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-05-28 17:22:32.146659 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-05-28 17:22:32.146665 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-05-28 17:22:32.146671 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2025-05-28 17:22:32.146678 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2025-05-28 17:22:32.146684 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2025-05-28 17:22:32.146690 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2025-05-28 17:22:32.146696 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2025-05-28 17:22:32.146702 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2025-05-28 17:22:32.146708 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2025-05-28 17:22:32.146714 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2025-05-28 17:22:32.146720 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2025-05-28 17:22:32.146726 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2025-05-28 17:22:32.146732 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2025-05-28 17:22:32.146738 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2025-05-28 17:22:32.146744 | orchestrator |
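
The six-second loop above is a straightforward file-module walk over the directory skeleton every node needs. An abridged, runnable equivalent (ownership 167 is the ceph uid/gid used inside the containers and is an assumption here; the full path list is visible in the loop output above):

  - name: Create ceph initial directories (sketch)
    ansible.builtin.file:
      path: "{{ item }}"
      state: directory
      owner: "167"
      group: "167"
      mode: "0755"
    loop:
      - /etc/ceph
      - /var/lib/ceph/mon
      - /var/lib/ceph/osd
      - /var/run/ceph
      - /var/log/ceph
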
2025-05-28 17:22:32.146750 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2025-05-28 17:22:32.146756 | orchestrator | Wednesday 28 May 2025 17:14:22 +0000 (0:00:06.144) 0:02:59.280 *********
2025-05-28 17:22:32.146762 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.146768 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.146774 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.146780 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-28 17:22:32.146786 | orchestrator |
2025-05-28 17:22:32.146792 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2025-05-28 17:22:32.146813 | orchestrator | Wednesday 28 May 2025 17:14:23 +0000 (0:00:00.930) 0:03:00.211 *********
2025-05-28 17:22:32.146820 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-28 17:22:32.146827 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-28 17:22:32.146863 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-28 17:22:32.146870 | orchestrator |
2025-05-28 17:22:32.146876 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2025-05-28 17:22:32.146882 | orchestrator | Wednesday 28 May 2025 17:14:23 +0000 (0:00:00.731) 0:03:00.942 *********
2025-05-28 17:22:32.146888 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-28 17:22:32.146895 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-28 17:22:32.146901 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-28 17:22:32.146907 | orchestrator |
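
The per-instance directory plus "Generate environment file" give each radosgw systemd unit an EnvironmentFile carrying its instance name. A sketch of what gets written (the path layout and variable follow ceph-ansible conventions; treat the details as assumptions):

  - name: Generate environment file (sketch)
    ansible.builtin.copy:
      dest: "/var/lib/ceph/radosgw/{{ cluster }}-rgw.{{ ansible_facts['hostname'] }}.{{ item.instance_name }}/EnvironmentFile"
      owner: root
      group: root
      mode: "0644"
      content: |
        INST_NAME={{ item.instance_name }}
    loop: "{{ rgw_instances }}"

Here rgw_instances is the list built earlier by ceph-facts, e.g. [{'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}].
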
2025-05-28 17:22:32.146913 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2025-05-28 17:22:32.146919 | orchestrator | Wednesday 28 May 2025 17:14:25 +0000 (0:00:01.542) 0:03:02.485 *********
2025-05-28 17:22:32.146925 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.146931 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.146937 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.146943 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:22:32.146949 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:22:32.146955 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:22:32.146961 | orchestrator |
2025-05-28 17:22:32.146967 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2025-05-28 17:22:32.146973 | orchestrator | Wednesday 28 May 2025 17:14:25 +0000 (0:00:00.509) 0:03:02.994 *********
2025-05-28 17:22:32.146984 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.146990 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.146996 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.147002 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:22:32.147008 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:22:32.147014 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:22:32.147020 | orchestrator |
2025-05-28 17:22:32.147026 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2025-05-28 17:22:32.147032 | orchestrator | Wednesday 28 May 2025 17:14:26 +0000 (0:00:00.652) 0:03:03.647 *********
2025-05-28 17:22:32.147038 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.147044 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.147050 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.147056 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:22:32.147062 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:22:32.147068 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:22:32.147074 | orchestrator |
2025-05-28 17:22:32.147080 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2025-05-28 17:22:32.147086 | orchestrator | Wednesday 28 May 2025 17:14:27 +0000 (0:00:00.475) 0:03:04.122 *********
2025-05-28 17:22:32.147092 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.147098 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.147104 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.147110 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:22:32.147116 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:22:32.147122 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:22:32.147128 | orchestrator |
2025-05-28 17:22:32.147134 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2025-05-28 17:22:32.147140 | orchestrator | Wednesday 28 May 2025 17:14:27 +0000 (0:00:00.720) 0:03:04.843 *********
2025-05-28 17:22:32.147146 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.147152 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.147158 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.147164 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:22:32.147174 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:22:32.147180 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:22:32.147186 | orchestrator |
2025-05-28 17:22:32.147192 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-05-28 17:22:32.147199 | orchestrator | Wednesday 28 May 2025 17:14:28 +0000 (0:00:00.644) 0:03:05.487 *********
2025-05-28 17:22:32.147205 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.147211 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.147217 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.147223 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:22:32.147228 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:22:32.147234 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:22:32.147240 | orchestrator |
2025-05-28 17:22:32.147247 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-05-28 17:22:32.147253 | orchestrator | Wednesday 28 May 2025 17:14:29 +0000 (0:00:00.699) 0:03:06.187 *********
2025-05-28 17:22:32.147259 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.147265 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:22:32.147271 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:22:32.147293 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:22:32.147305 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:22:32.147316 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:22:32.147325 | orchestrator |
2025-05-28 17:22:32.147335 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-05-28 17:22:32.147342 | orchestrator | Wednesday 28 May 2025 17:14:29 +0000 (0:00:00.603) 0:03:06.791 *********
2025-05-28 17:22:32.147348 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:22:32.147354 | orchestrator | skipping: [testbed-node-1]
| skipping: [testbed-node-2] 2025-05-28 17:22:32.147385 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.147391 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.147397 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.147403 | orchestrator | 2025-05-28 17:22:32.147409 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-28 17:22:32.147415 | orchestrator | Wednesday 28 May 2025 17:14:30 +0000 (0:00:01.006) 0:03:07.797 ********* 2025-05-28 17:22:32.147422 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.147428 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.147434 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.147440 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.147446 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.147452 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.147458 | orchestrator | 2025-05-28 17:22:32.147464 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-05-28 17:22:32.147470 | orchestrator | Wednesday 28 May 2025 17:14:34 +0000 (0:00:03.952) 0:03:11.750 ********* 2025-05-28 17:22:32.147476 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.147482 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.147488 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.147494 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.147500 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.147506 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.147512 | orchestrator | 2025-05-28 17:22:32.147518 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-05-28 17:22:32.147525 | orchestrator | Wednesday 28 May 2025 17:14:35 +0000 (0:00:00.928) 0:03:12.679 ********* 2025-05-28 17:22:32.147531 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.147537 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.147543 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.147549 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.147555 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.147561 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.147572 | orchestrator | 2025-05-28 17:22:32.147578 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-05-28 17:22:32.147585 | orchestrator | Wednesday 28 May 2025 17:14:36 +0000 (0:00:00.668) 0:03:13.347 ********* 2025-05-28 17:22:32.147591 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.147597 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.147603 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.147609 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.147619 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.147625 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.147631 | orchestrator | 2025-05-28 17:22:32.147637 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-05-28 17:22:32.147643 | orchestrator | Wednesday 28 May 2025 17:14:37 +0000 (0:00:00.829) 0:03:14.177 ********* 2025-05-28 17:22:32.147649 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.147655 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.147661 | orchestrator | skipping: [testbed-node-2] 
2025-05-28 17:22:32.147667 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-28 17:22:32.147673 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-28 17:22:32.147680 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-28 17:22:32.147686 | orchestrator | 2025-05-28 17:22:32.147692 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-05-28 17:22:32.147698 | orchestrator | Wednesday 28 May 2025 17:14:37 +0000 (0:00:00.620) 0:03:14.798 ********* 2025-05-28 17:22:32.147704 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.147710 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.147716 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.147724 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-05-28 17:22:32.147732 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-05-28 17:22:32.147740 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.147746 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-05-28 17:22:32.147752 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-05-28 17:22:32.147758 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.147779 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-05-28 17:22:32.147787 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-05-28 17:22:32.147813 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.147820 | orchestrator | 2025-05-28 17:22:32.147826 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-05-28 17:22:32.147833 | orchestrator | Wednesday 28 May 2025 17:14:38 
+0000 (0:00:00.914) 0:03:15.713 ********* 2025-05-28 17:22:32.147839 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.147845 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.147851 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.147857 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.147863 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.147869 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.147875 | orchestrator | 2025-05-28 17:22:32.147881 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-05-28 17:22:32.147887 | orchestrator | Wednesday 28 May 2025 17:14:39 +0000 (0:00:00.681) 0:03:16.395 ********* 2025-05-28 17:22:32.147894 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.147900 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.147906 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.147912 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.147918 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.147924 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.147930 | orchestrator | 2025-05-28 17:22:32.147936 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-28 17:22:32.147943 | orchestrator | Wednesday 28 May 2025 17:14:40 +0000 (0:00:00.842) 0:03:17.237 ********* 2025-05-28 17:22:32.147949 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.147958 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.147964 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.147971 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.147976 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.147983 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.147989 | orchestrator | 2025-05-28 17:22:32.147995 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-28 17:22:32.148001 | orchestrator | Wednesday 28 May 2025 17:14:40 +0000 (0:00:00.659) 0:03:17.896 ********* 2025-05-28 17:22:32.148007 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.148013 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.148019 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.148025 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.148031 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.148037 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.148043 | orchestrator | 2025-05-28 17:22:32.148050 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-28 17:22:32.148056 | orchestrator | Wednesday 28 May 2025 17:14:41 +0000 (0:00:00.916) 0:03:18.812 ********* 2025-05-28 17:22:32.148062 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.148068 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.148074 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.148080 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.148086 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.148092 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.148098 | orchestrator | 2025-05-28 17:22:32.148104 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-05-28 
17:22:32.148111 | orchestrator | Wednesday 28 May 2025 17:14:42 +0000 (0:00:00.774) 0:03:19.587 ********* 2025-05-28 17:22:32.148117 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.148123 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.148129 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.148135 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.148141 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.148151 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.148158 | orchestrator | 2025-05-28 17:22:32.148164 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-05-28 17:22:32.148170 | orchestrator | Wednesday 28 May 2025 17:14:43 +0000 (0:00:01.208) 0:03:20.795 ********* 2025-05-28 17:22:32.148176 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-28 17:22:32.148182 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-28 17:22:32.148188 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-28 17:22:32.148194 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.148200 | orchestrator | 2025-05-28 17:22:32.148207 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-28 17:22:32.148213 | orchestrator | Wednesday 28 May 2025 17:14:44 +0000 (0:00:00.416) 0:03:21.212 ********* 2025-05-28 17:22:32.148219 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-28 17:22:32.148225 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-28 17:22:32.148231 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-28 17:22:32.148237 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.148243 | orchestrator | 2025-05-28 17:22:32.148249 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-28 17:22:32.148255 | orchestrator | Wednesday 28 May 2025 17:14:44 +0000 (0:00:00.424) 0:03:21.636 ********* 2025-05-28 17:22:32.148262 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-28 17:22:32.148268 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-28 17:22:32.148274 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-28 17:22:32.148316 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.148323 | orchestrator | 2025-05-28 17:22:32.148346 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-05-28 17:22:32.148353 | orchestrator | Wednesday 28 May 2025 17:14:44 +0000 (0:00:00.376) 0:03:22.013 ********* 2025-05-28 17:22:32.148360 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.148366 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.148372 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.148378 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.148384 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.148390 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.148397 | orchestrator | 2025-05-28 17:22:32.148403 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-05-28 17:22:32.148409 | orchestrator | Wednesday 28 May 2025 17:14:45 +0000 (0:00:00.533) 0:03:22.546 ********* 2025-05-28 17:22:32.148415 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-28 17:22:32.148421 | 
orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.148428 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-28 17:22:32.148433 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.148439 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-28 17:22:32.148444 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.148450 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-28 17:22:32.148455 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-28 17:22:32.148460 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-28 17:22:32.148466 | orchestrator | 2025-05-28 17:22:32.148471 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-05-28 17:22:32.148476 | orchestrator | Wednesday 28 May 2025 17:14:47 +0000 (0:00:01.718) 0:03:24.265 ********* 2025-05-28 17:22:32.148482 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:22:32.148487 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:22:32.148492 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:22:32.148498 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:22:32.148503 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:22:32.148508 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:22:32.148513 | orchestrator | 2025-05-28 17:22:32.148525 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-05-28 17:22:32.148531 | orchestrator | Wednesday 28 May 2025 17:14:49 +0000 (0:00:02.718) 0:03:26.983 ********* 2025-05-28 17:22:32.148536 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:22:32.148542 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:22:32.148547 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:22:32.148556 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:22:32.148561 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:22:32.148567 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:22:32.148572 | orchestrator | 2025-05-28 17:22:32.148577 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-05-28 17:22:32.148583 | orchestrator | Wednesday 28 May 2025 17:14:51 +0000 (0:00:01.209) 0:03:28.193 ********* 2025-05-28 17:22:32.148588 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.148593 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.148599 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.148604 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:22:32.148610 | orchestrator | 2025-05-28 17:22:32.148615 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-05-28 17:22:32.148621 | orchestrator | Wednesday 28 May 2025 17:14:52 +0000 (0:00:01.149) 0:03:29.342 ********* 2025-05-28 17:22:32.148626 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.148631 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.148637 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.148642 | orchestrator | 2025-05-28 17:22:32.148647 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-05-28 17:22:32.148653 | orchestrator | Wednesday 28 May 2025 17:14:52 +0000 (0:00:00.372) 0:03:29.715 ********* 2025-05-28 17:22:32.148658 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:22:32.148664 | orchestrator | changed: 
[testbed-node-1] 2025-05-28 17:22:32.148669 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:22:32.148674 | orchestrator | 2025-05-28 17:22:32.148680 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-05-28 17:22:32.148685 | orchestrator | Wednesday 28 May 2025 17:14:54 +0000 (0:00:01.811) 0:03:31.527 ********* 2025-05-28 17:22:32.148690 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-28 17:22:32.148696 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-28 17:22:32.148701 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-28 17:22:32.148706 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.148711 | orchestrator | 2025-05-28 17:22:32.148717 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-05-28 17:22:32.148722 | orchestrator | Wednesday 28 May 2025 17:14:55 +0000 (0:00:00.600) 0:03:32.127 ********* 2025-05-28 17:22:32.148728 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.148733 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.148738 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.148744 | orchestrator | 2025-05-28 17:22:32.148749 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-05-28 17:22:32.148754 | orchestrator | Wednesday 28 May 2025 17:14:55 +0000 (0:00:00.361) 0:03:32.489 ********* 2025-05-28 17:22:32.148760 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.148765 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.148770 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.148776 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:22:32.148781 | orchestrator | 2025-05-28 17:22:32.148786 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-05-28 17:22:32.148792 | orchestrator | Wednesday 28 May 2025 17:14:56 +0000 (0:00:01.045) 0:03:33.534 ********* 2025-05-28 17:22:32.148797 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-28 17:22:32.148803 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-28 17:22:32.148813 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-28 17:22:32.148818 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.148823 | orchestrator | 2025-05-28 17:22:32.148841 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-05-28 17:22:32.148847 | orchestrator | Wednesday 28 May 2025 17:14:56 +0000 (0:00:00.413) 0:03:33.948 ********* 2025-05-28 17:22:32.148852 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.148858 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.148863 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.148869 | orchestrator | 2025-05-28 17:22:32.148874 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-05-28 17:22:32.148879 | orchestrator | Wednesday 28 May 2025 17:14:57 +0000 (0:00:00.322) 0:03:34.271 ********* 2025-05-28 17:22:32.148885 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.148890 | orchestrator | 2025-05-28 17:22:32.148895 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-05-28 
17:22:32.148901 | orchestrator | Wednesday 28 May 2025 17:14:57 +0000 (0:00:00.253) 0:03:34.524 ********* 2025-05-28 17:22:32.148906 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.148911 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.148917 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.148922 | orchestrator | 2025-05-28 17:22:32.148927 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-05-28 17:22:32.148933 | orchestrator | Wednesday 28 May 2025 17:14:57 +0000 (0:00:00.337) 0:03:34.861 ********* 2025-05-28 17:22:32.148938 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.148943 | orchestrator | 2025-05-28 17:22:32.148949 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-05-28 17:22:32.148954 | orchestrator | Wednesday 28 May 2025 17:14:57 +0000 (0:00:00.217) 0:03:35.079 ********* 2025-05-28 17:22:32.148960 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.148965 | orchestrator | 2025-05-28 17:22:32.148970 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-05-28 17:22:32.148976 | orchestrator | Wednesday 28 May 2025 17:14:58 +0000 (0:00:00.211) 0:03:35.290 ********* 2025-05-28 17:22:32.148981 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.148986 | orchestrator | 2025-05-28 17:22:32.148992 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-05-28 17:22:32.148997 | orchestrator | Wednesday 28 May 2025 17:14:58 +0000 (0:00:00.325) 0:03:35.616 ********* 2025-05-28 17:22:32.149002 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.149008 | orchestrator | 2025-05-28 17:22:32.149017 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-05-28 17:22:32.149022 | orchestrator | Wednesday 28 May 2025 17:14:58 +0000 (0:00:00.257) 0:03:35.874 ********* 2025-05-28 17:22:32.149027 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.149033 | orchestrator | 2025-05-28 17:22:32.149038 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-05-28 17:22:32.149044 | orchestrator | Wednesday 28 May 2025 17:14:58 +0000 (0:00:00.227) 0:03:36.101 ********* 2025-05-28 17:22:32.149049 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-28 17:22:32.149054 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-28 17:22:32.149060 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-28 17:22:32.149065 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.149070 | orchestrator | 2025-05-28 17:22:32.149076 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-05-28 17:22:32.149081 | orchestrator | Wednesday 28 May 2025 17:14:59 +0000 (0:00:00.391) 0:03:36.493 ********* 2025-05-28 17:22:32.149087 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.149092 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.149097 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.149108 | orchestrator | 2025-05-28 17:22:32.149113 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-05-28 17:22:32.149119 | orchestrator | Wednesday 28 May 2025 17:14:59 +0000 (0:00:00.304) 0:03:36.798 ********* 2025-05-28 
17:22:32.149124 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.149129 | orchestrator | 2025-05-28 17:22:32.149135 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-05-28 17:22:32.149140 | orchestrator | Wednesday 28 May 2025 17:14:59 +0000 (0:00:00.202) 0:03:37.000 ********* 2025-05-28 17:22:32.149146 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.149151 | orchestrator | 2025-05-28 17:22:32.149156 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-05-28 17:22:32.149161 | orchestrator | Wednesday 28 May 2025 17:15:00 +0000 (0:00:00.206) 0:03:37.207 ********* 2025-05-28 17:22:32.149167 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.149172 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.149178 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.149183 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:22:32.149188 | orchestrator | 2025-05-28 17:22:32.149194 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-05-28 17:22:32.149199 | orchestrator | Wednesday 28 May 2025 17:15:01 +0000 (0:00:01.044) 0:03:38.251 ********* 2025-05-28 17:22:32.149204 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.149210 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.149215 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.149220 | orchestrator | 2025-05-28 17:22:32.149226 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-05-28 17:22:32.149231 | orchestrator | Wednesday 28 May 2025 17:15:01 +0000 (0:00:00.382) 0:03:38.633 ********* 2025-05-28 17:22:32.149236 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:22:32.149242 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:22:32.149247 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:22:32.149252 | orchestrator | 2025-05-28 17:22:32.149258 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-05-28 17:22:32.149263 | orchestrator | Wednesday 28 May 2025 17:15:02 +0000 (0:00:01.375) 0:03:40.008 ********* 2025-05-28 17:22:32.149269 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-28 17:22:32.149274 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-28 17:22:32.149296 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-28 17:22:32.149315 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.149321 | orchestrator | 2025-05-28 17:22:32.149327 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-05-28 17:22:32.149332 | orchestrator | Wednesday 28 May 2025 17:15:03 +0000 (0:00:01.036) 0:03:41.044 ********* 2025-05-28 17:22:32.149338 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.149343 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.149348 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.149354 | orchestrator | 2025-05-28 17:22:32.149359 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-05-28 17:22:32.149364 | orchestrator | Wednesday 28 May 2025 17:15:04 +0000 (0:00:00.312) 0:03:41.357 ********* 2025-05-28 17:22:32.149370 | orchestrator | skipping: [testbed-node-0] 2025-05-28 
17:22:32.149375 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.149380 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.149386 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:22:32.149391 | orchestrator | 2025-05-28 17:22:32.149397 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-05-28 17:22:32.149402 | orchestrator | Wednesday 28 May 2025 17:15:05 +0000 (0:00:00.894) 0:03:42.252 ********* 2025-05-28 17:22:32.149407 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.149413 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.149435 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.149441 | orchestrator | 2025-05-28 17:22:32.149446 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-05-28 17:22:32.149452 | orchestrator | Wednesday 28 May 2025 17:15:05 +0000 (0:00:00.311) 0:03:42.563 ********* 2025-05-28 17:22:32.149457 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:22:32.149463 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:22:32.149468 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:22:32.149473 | orchestrator | 2025-05-28 17:22:32.149479 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-05-28 17:22:32.149484 | orchestrator | Wednesday 28 May 2025 17:15:06 +0000 (0:00:01.174) 0:03:43.738 ********* 2025-05-28 17:22:32.149490 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-28 17:22:32.149495 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-28 17:22:32.149500 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-28 17:22:32.149509 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.149514 | orchestrator | 2025-05-28 17:22:32.149520 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-05-28 17:22:32.149525 | orchestrator | Wednesday 28 May 2025 17:15:07 +0000 (0:00:00.603) 0:03:44.341 ********* 2025-05-28 17:22:32.149530 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.149536 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.149541 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.149546 | orchestrator | 2025-05-28 17:22:32.149552 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-05-28 17:22:32.149557 | orchestrator | Wednesday 28 May 2025 17:15:07 +0000 (0:00:00.227) 0:03:44.569 ********* 2025-05-28 17:22:32.149563 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.149568 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.149573 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.149578 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.149584 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.149589 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.149594 | orchestrator | 2025-05-28 17:22:32.149600 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-05-28 17:22:32.149605 | orchestrator | Wednesday 28 May 2025 17:15:07 +0000 (0:00:00.534) 0:03:45.104 ********* 2025-05-28 17:22:32.149610 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.149616 | orchestrator | skipping: [testbed-node-4] 2025-05-28 
17:22:32.149621 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.149626 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:22:32.149632 | orchestrator | 2025-05-28 17:22:32.149637 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-05-28 17:22:32.149642 | orchestrator | Wednesday 28 May 2025 17:15:08 +0000 (0:00:00.725) 0:03:45.829 ********* 2025-05-28 17:22:32.149648 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.149653 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.149658 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.149664 | orchestrator | 2025-05-28 17:22:32.149669 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-05-28 17:22:32.149675 | orchestrator | Wednesday 28 May 2025 17:15:08 +0000 (0:00:00.273) 0:03:46.103 ********* 2025-05-28 17:22:32.149680 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:22:32.149685 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:22:32.149691 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:22:32.149696 | orchestrator | 2025-05-28 17:22:32.149701 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-05-28 17:22:32.149707 | orchestrator | Wednesday 28 May 2025 17:15:10 +0000 (0:00:01.129) 0:03:47.232 ********* 2025-05-28 17:22:32.149712 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-28 17:22:32.149717 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-28 17:22:32.149726 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-28 17:22:32.149732 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.149737 | orchestrator | 2025-05-28 17:22:32.149742 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-05-28 17:22:32.149748 | orchestrator | Wednesday 28 May 2025 17:15:10 +0000 (0:00:00.796) 0:03:48.029 ********* 2025-05-28 17:22:32.149753 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.149758 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.149764 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.149769 | orchestrator | 2025-05-28 17:22:32.149774 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-05-28 17:22:32.149780 | orchestrator | 2025-05-28 17:22:32.149785 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-05-28 17:22:32.149804 | orchestrator | Wednesday 28 May 2025 17:15:11 +0000 (0:00:00.654) 0:03:48.683 ********* 2025-05-28 17:22:32.149810 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:22:32.149816 | orchestrator | 2025-05-28 17:22:32.149821 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-05-28 17:22:32.149826 | orchestrator | Wednesday 28 May 2025 17:15:12 +0000 (0:00:00.467) 0:03:49.151 ********* 2025-05-28 17:22:32.149832 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:22:32.149837 | orchestrator | 2025-05-28 17:22:32.149842 | orchestrator | TASK [ceph-handler : Check for a mon container] 
******************************** 2025-05-28 17:22:32.149848 | orchestrator | Wednesday 28 May 2025 17:15:12 +0000 (0:00:00.640) 0:03:49.791 ********* 2025-05-28 17:22:32.149853 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.149859 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.149864 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.149869 | orchestrator | 2025-05-28 17:22:32.149874 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-05-28 17:22:32.149880 | orchestrator | Wednesday 28 May 2025 17:15:13 +0000 (0:00:00.770) 0:03:50.562 ********* 2025-05-28 17:22:32.149885 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.149891 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.149896 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.149901 | orchestrator | 2025-05-28 17:22:32.149907 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-05-28 17:22:32.149912 | orchestrator | Wednesday 28 May 2025 17:15:13 +0000 (0:00:00.321) 0:03:50.883 ********* 2025-05-28 17:22:32.149917 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.149923 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.149928 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.149933 | orchestrator | 2025-05-28 17:22:32.149939 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-05-28 17:22:32.149944 | orchestrator | Wednesday 28 May 2025 17:15:14 +0000 (0:00:00.310) 0:03:51.194 ********* 2025-05-28 17:22:32.149949 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.149955 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.149960 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.149965 | orchestrator | 2025-05-28 17:22:32.149974 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-05-28 17:22:32.149979 | orchestrator | Wednesday 28 May 2025 17:15:14 +0000 (0:00:00.609) 0:03:51.804 ********* 2025-05-28 17:22:32.149985 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.149990 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.149995 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.150001 | orchestrator | 2025-05-28 17:22:32.150006 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-05-28 17:22:32.150011 | orchestrator | Wednesday 28 May 2025 17:15:15 +0000 (0:00:00.782) 0:03:52.587 ********* 2025-05-28 17:22:32.150036 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.150046 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.150051 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.150056 | orchestrator | 2025-05-28 17:22:32.150062 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-05-28 17:22:32.150067 | orchestrator | Wednesday 28 May 2025 17:15:15 +0000 (0:00:00.329) 0:03:52.917 ********* 2025-05-28 17:22:32.150072 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.150078 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.150083 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.150089 | orchestrator | 2025-05-28 17:22:32.150094 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-05-28 17:22:32.150099 | orchestrator | Wednesday 28 May 
2025 17:15:16 +0000 (0:00:00.300) 0:03:53.217 ********* 2025-05-28 17:22:32.150105 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.150110 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.150116 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.150121 | orchestrator | 2025-05-28 17:22:32.150126 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-05-28 17:22:32.150132 | orchestrator | Wednesday 28 May 2025 17:15:17 +0000 (0:00:01.364) 0:03:54.582 ********* 2025-05-28 17:22:32.150138 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.150143 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.150148 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.150153 | orchestrator | 2025-05-28 17:22:32.150159 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-05-28 17:22:32.150164 | orchestrator | Wednesday 28 May 2025 17:15:18 +0000 (0:00:00.777) 0:03:55.359 ********* 2025-05-28 17:22:32.150170 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.150175 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.150180 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.150186 | orchestrator | 2025-05-28 17:22:32.150191 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-05-28 17:22:32.150196 | orchestrator | Wednesday 28 May 2025 17:15:18 +0000 (0:00:00.365) 0:03:55.725 ********* 2025-05-28 17:22:32.150202 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.150207 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.150212 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.150218 | orchestrator | 2025-05-28 17:22:32.150223 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-05-28 17:22:32.150228 | orchestrator | Wednesday 28 May 2025 17:15:19 +0000 (0:00:00.410) 0:03:56.136 ********* 2025-05-28 17:22:32.150234 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.150239 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.150244 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.150250 | orchestrator | 2025-05-28 17:22:32.150255 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-05-28 17:22:32.150260 | orchestrator | Wednesday 28 May 2025 17:15:19 +0000 (0:00:00.727) 0:03:56.863 ********* 2025-05-28 17:22:32.150266 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.150271 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.150292 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.150298 | orchestrator | 2025-05-28 17:22:32.150304 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-05-28 17:22:32.150325 | orchestrator | Wednesday 28 May 2025 17:15:20 +0000 (0:00:00.339) 0:03:57.203 ********* 2025-05-28 17:22:32.150331 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.150337 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.150342 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.150347 | orchestrator | 2025-05-28 17:22:32.150353 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-05-28 17:22:32.150358 | orchestrator | Wednesday 28 May 2025 17:15:20 +0000 (0:00:00.343) 0:03:57.546 ********* 2025-05-28 17:22:32.150363 | orchestrator | skipping: 
[testbed-node-0] 2025-05-28 17:22:32.150369 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.150378 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.150383 | orchestrator | 2025-05-28 17:22:32.150389 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-05-28 17:22:32.150394 | orchestrator | Wednesday 28 May 2025 17:15:20 +0000 (0:00:00.477) 0:03:58.024 ********* 2025-05-28 17:22:32.150400 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.150405 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.150410 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.150416 | orchestrator | 2025-05-28 17:22:32.150421 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-05-28 17:22:32.150426 | orchestrator | Wednesday 28 May 2025 17:15:21 +0000 (0:00:00.595) 0:03:58.619 ********* 2025-05-28 17:22:32.150432 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.150437 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.150442 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.150448 | orchestrator | 2025-05-28 17:22:32.150453 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-05-28 17:22:32.150458 | orchestrator | Wednesday 28 May 2025 17:15:21 +0000 (0:00:00.398) 0:03:59.017 ********* 2025-05-28 17:22:32.150464 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.150469 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.150474 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.150480 | orchestrator | 2025-05-28 17:22:32.150485 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-05-28 17:22:32.150491 | orchestrator | Wednesday 28 May 2025 17:15:22 +0000 (0:00:00.348) 0:03:59.366 ********* 2025-05-28 17:22:32.150496 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.150501 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.150507 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.150512 | orchestrator | 2025-05-28 17:22:32.150517 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-05-28 17:22:32.150536 | orchestrator | Wednesday 28 May 2025 17:15:23 +0000 (0:00:00.770) 0:04:00.136 ********* 2025-05-28 17:22:32.150542 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.150547 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.150553 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.150558 | orchestrator | 2025-05-28 17:22:32.150563 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-05-28 17:22:32.150569 | orchestrator | Wednesday 28 May 2025 17:15:23 +0000 (0:00:00.390) 0:04:00.526 ********* 2025-05-28 17:22:32.150574 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:22:32.150580 | orchestrator | 2025-05-28 17:22:32.150585 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-05-28 17:22:32.150590 | orchestrator | Wednesday 28 May 2025 17:15:23 +0000 (0:00:00.579) 0:04:01.106 ********* 2025-05-28 17:22:32.150596 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.150601 | orchestrator | 2025-05-28 17:22:32.150606 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 
2025-05-28 17:22:32.150612 | orchestrator | Wednesday 28 May 2025 17:15:24 +0000 (0:00:00.161) 0:04:01.267 ********* 2025-05-28 17:22:32.150617 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-05-28 17:22:32.150622 | orchestrator | 2025-05-28 17:22:32.150628 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-05-28 17:22:32.150633 | orchestrator | Wednesday 28 May 2025 17:15:25 +0000 (0:00:01.762) 0:04:03.029 ********* 2025-05-28 17:22:32.150639 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.150644 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.150649 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.150654 | orchestrator | 2025-05-28 17:22:32.150660 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-05-28 17:22:32.150665 | orchestrator | Wednesday 28 May 2025 17:15:26 +0000 (0:00:00.338) 0:04:03.368 ********* 2025-05-28 17:22:32.150670 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.150676 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.150685 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.150690 | orchestrator | 2025-05-28 17:22:32.150696 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-05-28 17:22:32.150701 | orchestrator | Wednesday 28 May 2025 17:15:26 +0000 (0:00:00.322) 0:04:03.690 ********* 2025-05-28 17:22:32.150707 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:22:32.150712 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:22:32.150717 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:22:32.150723 | orchestrator | 2025-05-28 17:22:32.150728 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-05-28 17:22:32.150733 | orchestrator | Wednesday 28 May 2025 17:15:27 +0000 (0:00:01.355) 0:04:05.045 ********* 2025-05-28 17:22:32.150739 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:22:32.150744 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:22:32.150749 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:22:32.150755 | orchestrator | 2025-05-28 17:22:32.150760 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-05-28 17:22:32.150766 | orchestrator | Wednesday 28 May 2025 17:15:29 +0000 (0:00:01.068) 0:04:06.114 ********* 2025-05-28 17:22:32.150771 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:22:32.150776 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:22:32.150782 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:22:32.150787 | orchestrator | 2025-05-28 17:22:32.150792 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-05-28 17:22:32.150798 | orchestrator | Wednesday 28 May 2025 17:15:29 +0000 (0:00:00.700) 0:04:06.814 ********* 2025-05-28 17:22:32.150803 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.150808 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.150814 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.150819 | orchestrator | 2025-05-28 17:22:32.150837 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-05-28 17:22:32.150843 | orchestrator | Wednesday 28 May 2025 17:15:30 +0000 (0:00:00.702) 0:04:07.517 ********* 2025-05-28 17:22:32.150849 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:22:32.150854 | orchestrator | 2025-05-28 
17:22:32.150859 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-05-28 17:22:32.150865 | orchestrator | Wednesday 28 May 2025 17:15:31 +0000 (0:00:01.203) 0:04:08.721 ********* 2025-05-28 17:22:32.150870 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.150875 | orchestrator | 2025-05-28 17:22:32.150881 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-05-28 17:22:32.150886 | orchestrator | Wednesday 28 May 2025 17:15:32 +0000 (0:00:00.666) 0:04:09.387 ********* 2025-05-28 17:22:32.150891 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-28 17:22:32.150897 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 17:22:32.150902 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 17:22:32.150908 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-28 17:22:32.150913 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-05-28 17:22:32.150918 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-28 17:22:32.150924 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-28 17:22:32.150929 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-05-28 17:22:32.150935 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-28 17:22:32.150940 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-05-28 17:22:32.150945 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-05-28 17:22:32.150951 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-05-28 17:22:32.150956 | orchestrator | 2025-05-28 17:22:32.150961 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-05-28 17:22:32.150967 | orchestrator | Wednesday 28 May 2025 17:15:35 +0000 (0:00:03.432) 0:04:12.820 ********* 2025-05-28 17:22:32.150985 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:22:32.150991 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:22:32.150996 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:22:32.151001 | orchestrator | 2025-05-28 17:22:32.151017 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-05-28 17:22:32.151023 | orchestrator | Wednesday 28 May 2025 17:15:37 +0000 (0:00:01.437) 0:04:14.258 ********* 2025-05-28 17:22:32.151029 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.151034 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.151039 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.151045 | orchestrator | 2025-05-28 17:22:32.151050 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-05-28 17:22:32.151055 | orchestrator | Wednesday 28 May 2025 17:15:37 +0000 (0:00:00.323) 0:04:14.581 ********* 2025-05-28 17:22:32.151061 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.151066 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.151071 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.151077 | orchestrator | 2025-05-28 17:22:32.151082 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-05-28 17:22:32.151087 | orchestrator | Wednesday 28 May 2025 17:15:37 +0000 (0:00:00.343) 0:04:14.925 ********* 2025-05-28 17:22:32.151093 | orchestrator | changed: 
[testbed-node-0] 2025-05-28 17:22:32.151098 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:22:32.151104 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:22:32.151109 | orchestrator | 2025-05-28 17:22:32.151114 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-05-28 17:22:32.151120 | orchestrator | Wednesday 28 May 2025 17:15:39 +0000 (0:00:01.493) 0:04:16.419 ********* 2025-05-28 17:22:32.151125 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:22:32.151130 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:22:32.151136 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:22:32.151141 | orchestrator | 2025-05-28 17:22:32.151146 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-05-28 17:22:32.151152 | orchestrator | Wednesday 28 May 2025 17:15:40 +0000 (0:00:01.381) 0:04:17.800 ********* 2025-05-28 17:22:32.151157 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.151162 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.151168 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.151173 | orchestrator | 2025-05-28 17:22:32.151178 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-05-28 17:22:32.151184 | orchestrator | Wednesday 28 May 2025 17:15:41 +0000 (0:00:00.416) 0:04:18.217 ********* 2025-05-28 17:22:32.151189 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:22:32.151194 | orchestrator | 2025-05-28 17:22:32.151200 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-05-28 17:22:32.151205 | orchestrator | Wednesday 28 May 2025 17:15:41 +0000 (0:00:00.558) 0:04:18.776 ********* 2025-05-28 17:22:32.151210 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.151216 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.151221 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.151226 | orchestrator | 2025-05-28 17:22:32.151232 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-05-28 17:22:32.151237 | orchestrator | Wednesday 28 May 2025 17:15:42 +0000 (0:00:00.537) 0:04:19.313 ********* 2025-05-28 17:22:32.151242 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.151248 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.151253 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.151259 | orchestrator | 2025-05-28 17:22:32.151264 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-05-28 17:22:32.151269 | orchestrator | Wednesday 28 May 2025 17:15:42 +0000 (0:00:00.399) 0:04:19.713 ********* 2025-05-28 17:22:32.151275 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:22:32.151308 | orchestrator | 2025-05-28 17:22:32.151317 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-05-28 17:22:32.151344 | orchestrator | Wednesday 28 May 2025 17:15:43 +0000 (0:00:00.495) 0:04:20.208 ********* 2025-05-28 17:22:32.151351 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:22:32.151356 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:22:32.151362 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:22:32.151367 | 
orchestrator | 2025-05-28 17:22:32.151372 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-05-28 17:22:32.151378 | orchestrator | Wednesday 28 May 2025 17:15:44 +0000 (0:00:01.856) 0:04:22.064 ********* 2025-05-28 17:22:32.151383 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:22:32.151388 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:22:32.151394 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:22:32.151399 | orchestrator | 2025-05-28 17:22:32.151404 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-05-28 17:22:32.151409 | orchestrator | Wednesday 28 May 2025 17:15:46 +0000 (0:00:01.169) 0:04:23.233 ********* 2025-05-28 17:22:32.151415 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:22:32.151420 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:22:32.151425 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:22:32.151431 | orchestrator | 2025-05-28 17:22:32.151436 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-05-28 17:22:32.151441 | orchestrator | Wednesday 28 May 2025 17:15:47 +0000 (0:00:01.767) 0:04:25.001 ********* 2025-05-28 17:22:32.151446 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:22:32.151452 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:22:32.151457 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:22:32.151462 | orchestrator | 2025-05-28 17:22:32.151468 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-05-28 17:22:32.151473 | orchestrator | Wednesday 28 May 2025 17:15:49 +0000 (0:00:01.956) 0:04:26.958 ********* 2025-05-28 17:22:32.151478 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:22:32.151484 | orchestrator | 2025-05-28 17:22:32.151489 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-05-28 17:22:32.151494 | orchestrator | Wednesday 28 May 2025 17:15:50 +0000 (0:00:00.780) 0:04:27.738 ********* 2025-05-28 17:22:32.151500 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2025-05-28 17:22:32.151505 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.151510 | orchestrator | 2025-05-28 17:22:32.151519 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-05-28 17:22:32.151524 | orchestrator | Wednesday 28 May 2025 17:16:12 +0000 (0:00:21.758) 0:04:49.497 ********* 2025-05-28 17:22:32.151530 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.151535 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.151542 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.151551 | orchestrator | 2025-05-28 17:22:32.151560 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-05-28 17:22:32.151569 | orchestrator | Wednesday 28 May 2025 17:16:22 +0000 (0:00:10.122) 0:04:59.619 ********* 2025-05-28 17:22:32.151578 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.151586 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.151596 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.151601 | orchestrator | 2025-05-28 17:22:32.151607 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-05-28 17:22:32.151612 | orchestrator | Wednesday 28 May 2025 17:16:23 +0000 (0:00:00.529) 0:05:00.149 ********* 2025-05-28 17:22:32.151619 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cd2c80c6a80cb49eab3fc074982e206fbcdfc719'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-05-28 17:22:32.151630 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cd2c80c6a80cb49eab3fc074982e206fbcdfc719'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-05-28 17:22:32.151638 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cd2c80c6a80cb49eab3fc074982e206fbcdfc719'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-05-28 17:22:32.151644 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cd2c80c6a80cb49eab3fc074982e206fbcdfc719'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-05-28 17:22:32.151666 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cd2c80c6a80cb49eab3fc074982e206fbcdfc719'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-05-28 17:22:32.151673 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 
'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cd2c80c6a80cb49eab3fc074982e206fbcdfc719'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__cd2c80c6a80cb49eab3fc074982e206fbcdfc719'}])  2025-05-28 17:22:32.151679 | orchestrator | 2025-05-28 17:22:32.151685 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-05-28 17:22:32.151690 | orchestrator | Wednesday 28 May 2025 17:16:37 +0000 (0:00:14.479) 0:05:14.628 ********* 2025-05-28 17:22:32.151695 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.151701 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.151706 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.151711 | orchestrator | 2025-05-28 17:22:32.151717 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-05-28 17:22:32.151722 | orchestrator | Wednesday 28 May 2025 17:16:37 +0000 (0:00:00.363) 0:05:14.992 ********* 2025-05-28 17:22:32.151727 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:22:32.151733 | orchestrator | 2025-05-28 17:22:32.151738 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-05-28 17:22:32.151748 | orchestrator | Wednesday 28 May 2025 17:16:38 +0000 (0:00:00.754) 0:05:15.747 ********* 2025-05-28 17:22:32.151758 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.151767 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.151773 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.151778 | orchestrator | 2025-05-28 17:22:32.151783 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-05-28 17:22:32.151789 | orchestrator | Wednesday 28 May 2025 17:16:38 +0000 (0:00:00.323) 0:05:16.071 ********* 2025-05-28 17:22:32.151794 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.151803 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.151808 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.151814 | orchestrator | 2025-05-28 17:22:32.151819 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-05-28 17:22:32.151829 | orchestrator | Wednesday 28 May 2025 17:16:39 +0000 (0:00:00.419) 0:05:16.490 ********* 2025-05-28 17:22:32.151834 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-28 17:22:32.151839 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-28 17:22:32.151845 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-28 17:22:32.151850 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.151855 | orchestrator | 2025-05-28 17:22:32.151860 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-05-28 17:22:32.151866 | orchestrator | Wednesday 28 May 2025 17:16:40 +0000 (0:00:00.889) 0:05:17.379 ********* 2025-05-28 17:22:32.151871 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.151876 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.151882 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.151887 | orchestrator | 2025-05-28 17:22:32.151892 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 
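Editor's note: before the mgr play starts, it is worth noting that the "Set cluster configs" loop which closed the mon play is equivalent in effect to a handful of `ceph config set` calls, with the values taken from the items shown above:

```shell
ceph config set global public_network  192.168.16.0/20
ceph config set global cluster_network 192.168.16.0/20
ceph config set global osd_pool_default_crush_rule -1
ceph config set global ms_bind_ipv6 false
ceph config set global ms_bind_ipv4 true
# osd_crush_chooseleaf_type carried the __omit_place_holder__ marker,
# so that item was skipped and the Ceph default is kept.
```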
2025-05-28 17:22:32.151897 | orchestrator | 2025-05-28 17:22:32.151903 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-05-28 17:22:32.151908 | orchestrator | Wednesday 28 May 2025 17:16:41 +0000 (0:00:00.835) 0:05:18.215 ********* 2025-05-28 17:22:32.151913 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:22:32.151919 | orchestrator | 2025-05-28 17:22:32.151924 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-05-28 17:22:32.151929 | orchestrator | Wednesday 28 May 2025 17:16:41 +0000 (0:00:00.550) 0:05:18.766 ********* 2025-05-28 17:22:32.151934 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:22:32.151940 | orchestrator | 2025-05-28 17:22:32.151945 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-05-28 17:22:32.151950 | orchestrator | Wednesday 28 May 2025 17:16:42 +0000 (0:00:00.842) 0:05:19.608 ********* 2025-05-28 17:22:32.151956 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.151961 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.151966 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.151971 | orchestrator | 2025-05-28 17:22:32.151977 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-05-28 17:22:32.152019 | orchestrator | Wednesday 28 May 2025 17:16:43 +0000 (0:00:00.683) 0:05:20.292 ********* 2025-05-28 17:22:32.152025 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.152031 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.152037 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.152042 | orchestrator | 2025-05-28 17:22:32.152047 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-05-28 17:22:32.152053 | orchestrator | Wednesday 28 May 2025 17:16:43 +0000 (0:00:00.281) 0:05:20.573 ********* 2025-05-28 17:22:32.152058 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.152063 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.152069 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.152074 | orchestrator | 2025-05-28 17:22:32.152080 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-05-28 17:22:32.152085 | orchestrator | Wednesday 28 May 2025 17:16:43 +0000 (0:00:00.524) 0:05:21.098 ********* 2025-05-28 17:22:32.152091 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.152096 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.152102 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.152107 | orchestrator | 2025-05-28 17:22:32.152129 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-05-28 17:22:32.152135 | orchestrator | Wednesday 28 May 2025 17:16:44 +0000 (0:00:00.303) 0:05:21.402 ********* 2025-05-28 17:22:32.152140 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.152146 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.152151 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.152157 | orchestrator | 2025-05-28 17:22:32.152180 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-05-28 
17:22:32.152185 | orchestrator | Wednesday 28 May 2025 17:16:44 +0000 (0:00:00.690) 0:05:22.092 ********* 2025-05-28 17:22:32.152191 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.152196 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.152202 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.152207 | orchestrator | 2025-05-28 17:22:32.152213 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-05-28 17:22:32.152218 | orchestrator | Wednesday 28 May 2025 17:16:45 +0000 (0:00:00.296) 0:05:22.389 ********* 2025-05-28 17:22:32.152223 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.152229 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.152234 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.152240 | orchestrator | 2025-05-28 17:22:32.152245 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-05-28 17:22:32.152250 | orchestrator | Wednesday 28 May 2025 17:16:45 +0000 (0:00:00.604) 0:05:22.993 ********* 2025-05-28 17:22:32.152256 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.152261 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.152266 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.152272 | orchestrator | 2025-05-28 17:22:32.152312 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-05-28 17:22:32.152318 | orchestrator | Wednesday 28 May 2025 17:16:46 +0000 (0:00:00.668) 0:05:23.662 ********* 2025-05-28 17:22:32.152324 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.152329 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.152334 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.152340 | orchestrator | 2025-05-28 17:22:32.152345 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-05-28 17:22:32.152350 | orchestrator | Wednesday 28 May 2025 17:16:47 +0000 (0:00:00.727) 0:05:24.390 ********* 2025-05-28 17:22:32.152356 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.152361 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.152366 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.152371 | orchestrator | 2025-05-28 17:22:32.152381 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-05-28 17:22:32.152386 | orchestrator | Wednesday 28 May 2025 17:16:47 +0000 (0:00:00.304) 0:05:24.694 ********* 2025-05-28 17:22:32.152392 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.152397 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.152402 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.152408 | orchestrator | 2025-05-28 17:22:32.152413 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-05-28 17:22:32.152418 | orchestrator | Wednesday 28 May 2025 17:16:48 +0000 (0:00:00.609) 0:05:25.304 ********* 2025-05-28 17:22:32.152424 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.152429 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.152434 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.152439 | orchestrator | 2025-05-28 17:22:32.152445 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-05-28 17:22:32.152450 | orchestrator | Wednesday 28 May 2025 17:16:48 +0000 (0:00:00.344) 0:05:25.649 ********* 
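Editor's note: the "Check for a … container" tasks above only probe whether each daemon's container exists on the host, which is why checks for daemons not scheduled there (osd, mds, rgw, nfs, rbd mirror) are skipped on the control nodes. A plausible equivalent probe, assuming the usual ceph-ansible container naming convention (the name pattern is an assumption, not taken from this log):

```shell
# Non-empty output means a mon container is running on this host.
podman ps -q --filter "name=ceph-mon-$(hostname -s)"
```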
2025-05-28 17:22:32.152455 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.152461 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.152466 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.152471 | orchestrator | 2025-05-28 17:22:32.152477 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-05-28 17:22:32.152482 | orchestrator | Wednesday 28 May 2025 17:16:48 +0000 (0:00:00.347) 0:05:25.996 ********* 2025-05-28 17:22:32.152487 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.152493 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.152498 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.152503 | orchestrator | 2025-05-28 17:22:32.152509 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-05-28 17:22:32.152522 | orchestrator | Wednesday 28 May 2025 17:16:49 +0000 (0:00:00.417) 0:05:26.414 ********* 2025-05-28 17:22:32.152528 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.152533 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.152538 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.152543 | orchestrator | 2025-05-28 17:22:32.152549 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-05-28 17:22:32.152554 | orchestrator | Wednesday 28 May 2025 17:16:49 +0000 (0:00:00.587) 0:05:27.001 ********* 2025-05-28 17:22:32.152559 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.152565 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.152570 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.152575 | orchestrator | 2025-05-28 17:22:32.152581 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-05-28 17:22:32.152586 | orchestrator | Wednesday 28 May 2025 17:16:50 +0000 (0:00:00.299) 0:05:27.301 ********* 2025-05-28 17:22:32.152591 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.152597 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.152602 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.152608 | orchestrator | 2025-05-28 17:22:32.152613 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-05-28 17:22:32.152618 | orchestrator | Wednesday 28 May 2025 17:16:50 +0000 (0:00:00.361) 0:05:27.662 ********* 2025-05-28 17:22:32.152624 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.152629 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.152634 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.152640 | orchestrator | 2025-05-28 17:22:32.152645 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-05-28 17:22:32.152653 | orchestrator | Wednesday 28 May 2025 17:16:50 +0000 (0:00:00.298) 0:05:27.961 ********* 2025-05-28 17:22:32.152662 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.152671 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.152680 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.152686 | orchestrator | 2025-05-28 17:22:32.152691 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-05-28 17:22:32.152714 | orchestrator | Wednesday 28 May 2025 17:16:51 +0000 (0:00:00.820) 0:05:28.781 ********* 2025-05-28 17:22:32.152721 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-28 
17:22:32.152726 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-28 17:22:32.152730 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-28 17:22:32.152735 | orchestrator | 2025-05-28 17:22:32.152740 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-05-28 17:22:32.152745 | orchestrator | Wednesday 28 May 2025 17:16:52 +0000 (0:00:00.620) 0:05:29.402 ********* 2025-05-28 17:22:32.152749 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:22:32.152754 | orchestrator | 2025-05-28 17:22:32.152759 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-05-28 17:22:32.152764 | orchestrator | Wednesday 28 May 2025 17:16:52 +0000 (0:00:00.528) 0:05:29.930 ********* 2025-05-28 17:22:32.152768 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:22:32.152773 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:22:32.152778 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:22:32.152783 | orchestrator | 2025-05-28 17:22:32.152788 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-05-28 17:22:32.152792 | orchestrator | Wednesday 28 May 2025 17:16:53 +0000 (0:00:00.907) 0:05:30.837 ********* 2025-05-28 17:22:32.152797 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.152802 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.152807 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.152811 | orchestrator | 2025-05-28 17:22:32.152816 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-05-28 17:22:32.152826 | orchestrator | Wednesday 28 May 2025 17:16:54 +0000 (0:00:00.293) 0:05:31.131 ********* 2025-05-28 17:22:32.152831 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-28 17:22:32.152836 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-28 17:22:32.152841 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-28 17:22:32.152846 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-05-28 17:22:32.152851 | orchestrator | 2025-05-28 17:22:32.152856 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-05-28 17:22:32.152864 | orchestrator | Wednesday 28 May 2025 17:17:04 +0000 (0:00:10.443) 0:05:41.574 ********* 2025-05-28 17:22:32.152869 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.152874 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.152879 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.152883 | orchestrator | 2025-05-28 17:22:32.152888 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-05-28 17:22:32.152893 | orchestrator | Wednesday 28 May 2025 17:17:04 +0000 (0:00:00.334) 0:05:41.909 ********* 2025-05-28 17:22:32.152898 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-28 17:22:32.152902 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-28 17:22:32.152907 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-28 17:22:32.152912 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-05-28 17:22:32.152917 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 
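Editor's note: the "Create ceph mgr keyring(s) on a mon node" task above (one item per mgr, delegated to the first monitor) roughly corresponds to `ceph auth get-or-create` with the standard mgr capabilities; the output path below is illustrative:

```shell
ceph auth get-or-create mgr.testbed-node-0 \
    mon 'allow profile mgr' osd 'allow *' mds 'allow *' \
    -o /etc/ceph/ceph.mgr.testbed-node-0.keyring
```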
17:22:32.152922 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 17:22:32.152926 | orchestrator | 2025-05-28 17:22:32.152931 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-05-28 17:22:32.152936 | orchestrator | Wednesday 28 May 2025 17:17:07 +0000 (0:00:02.641) 0:05:44.550 ********* 2025-05-28 17:22:32.152941 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-28 17:22:32.152946 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-28 17:22:32.152950 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-28 17:22:32.152955 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-28 17:22:32.152960 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-05-28 17:22:32.152965 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-05-28 17:22:32.152969 | orchestrator | 2025-05-28 17:22:32.152974 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-05-28 17:22:32.152979 | orchestrator | Wednesday 28 May 2025 17:17:08 +0000 (0:00:01.169) 0:05:45.720 ********* 2025-05-28 17:22:32.152984 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.152989 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.152993 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.152998 | orchestrator | 2025-05-28 17:22:32.153003 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-05-28 17:22:32.153008 | orchestrator | Wednesday 28 May 2025 17:17:09 +0000 (0:00:00.650) 0:05:46.370 ********* 2025-05-28 17:22:32.153012 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.153017 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.153022 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.153027 | orchestrator | 2025-05-28 17:22:32.153031 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-05-28 17:22:32.153036 | orchestrator | Wednesday 28 May 2025 17:17:09 +0000 (0:00:00.278) 0:05:46.649 ********* 2025-05-28 17:22:32.153041 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.153046 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.153050 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.153055 | orchestrator | 2025-05-28 17:22:32.153060 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-05-28 17:22:32.153065 | orchestrator | Wednesday 28 May 2025 17:17:09 +0000 (0:00:00.282) 0:05:46.932 ********* 2025-05-28 17:22:32.153069 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:22:32.153088 | orchestrator | 2025-05-28 17:22:32.153093 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-05-28 17:22:32.153098 | orchestrator | Wednesday 28 May 2025 17:17:10 +0000 (0:00:00.812) 0:05:47.744 ********* 2025-05-28 17:22:32.153103 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.153120 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.153125 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.153130 | orchestrator | 2025-05-28 17:22:32.153135 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-05-28 17:22:32.153139 | orchestrator | Wednesday 28 May 2025 
17:17:10 +0000 (0:00:00.289) 0:05:48.033 ********* 2025-05-28 17:22:32.153144 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.153149 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.153154 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.153159 | orchestrator | 2025-05-28 17:22:32.153163 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-05-28 17:22:32.153168 | orchestrator | Wednesday 28 May 2025 17:17:11 +0000 (0:00:00.291) 0:05:48.325 ********* 2025-05-28 17:22:32.153173 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:22:32.153178 | orchestrator | 2025-05-28 17:22:32.153183 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-05-28 17:22:32.153188 | orchestrator | Wednesday 28 May 2025 17:17:12 +0000 (0:00:00.786) 0:05:49.112 ********* 2025-05-28 17:22:32.153192 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:22:32.153197 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:22:32.153202 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:22:32.153207 | orchestrator | 2025-05-28 17:22:32.153211 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-05-28 17:22:32.153216 | orchestrator | Wednesday 28 May 2025 17:17:13 +0000 (0:00:01.245) 0:05:50.357 ********* 2025-05-28 17:22:32.153221 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:22:32.153226 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:22:32.153230 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:22:32.153235 | orchestrator | 2025-05-28 17:22:32.153240 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-05-28 17:22:32.153245 | orchestrator | Wednesday 28 May 2025 17:17:14 +0000 (0:00:01.119) 0:05:51.477 ********* 2025-05-28 17:22:32.153249 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:22:32.153254 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:22:32.153259 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:22:32.153264 | orchestrator | 2025-05-28 17:22:32.153268 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-05-28 17:22:32.153293 | orchestrator | Wednesday 28 May 2025 17:17:16 +0000 (0:00:01.998) 0:05:53.476 ********* 2025-05-28 17:22:32.153299 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:22:32.153303 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:22:32.153308 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:22:32.153313 | orchestrator | 2025-05-28 17:22:32.153317 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-05-28 17:22:32.153322 | orchestrator | Wednesday 28 May 2025 17:17:18 +0000 (0:00:01.973) 0:05:55.449 ********* 2025-05-28 17:22:32.153327 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.153332 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.153337 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-05-28 17:22:32.153341 | orchestrator | 2025-05-28 17:22:32.153346 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-05-28 17:22:32.153351 | orchestrator | Wednesday 28 May 2025 17:17:18 +0000 (0:00:00.412) 0:05:55.862 ********* 2025-05-28 17:22:32.153355 | 
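Editor's note: the wait below polls until every mgr has registered with the cluster, i.e. one active daemon plus the remaining standbys. A manual equivalent, assuming the same expectation of one active and two standby mgrs:

```shell
# Ready when an active_name is set and the standbys list is fully populated.
ceph mgr dump | grep -E '"active_name"|"standbys"'
```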
orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-05-28 17:22:32.153360 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-05-28 17:22:32.153371 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-05-28 17:22:32.153375 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-05-28 17:22:32.153380 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2025-05-28 17:22:32.153385 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-05-28 17:22:32.153390 | orchestrator | 2025-05-28 17:22:32.153394 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-05-28 17:22:32.153399 | orchestrator | Wednesday 28 May 2025 17:17:48 +0000 (0:00:30.180) 0:06:26.042 ********* 2025-05-28 17:22:32.153404 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-05-28 17:22:32.153408 | orchestrator | 2025-05-28 17:22:32.153413 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-05-28 17:22:32.153418 | orchestrator | Wednesday 28 May 2025 17:17:50 +0000 (0:00:01.568) 0:06:27.610 ********* 2025-05-28 17:22:32.153422 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.153427 | orchestrator | 2025-05-28 17:22:32.153432 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-05-28 17:22:32.153436 | orchestrator | Wednesday 28 May 2025 17:17:51 +0000 (0:00:00.830) 0:06:28.441 ********* 2025-05-28 17:22:32.153441 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.153446 | orchestrator | 2025-05-28 17:22:32.153450 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-05-28 17:22:32.153455 | orchestrator | Wednesday 28 May 2025 17:17:51 +0000 (0:00:00.145) 0:06:28.587 ********* 2025-05-28 17:22:32.153460 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-05-28 17:22:32.153464 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-05-28 17:22:32.153469 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-05-28 17:22:32.153474 | orchestrator | 2025-05-28 17:22:32.153478 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2025-05-28 17:22:32.153483 | orchestrator | Wednesday 28 May 2025 17:17:57 +0000 (0:00:06.474) 0:06:35.061 ********* 2025-05-28 17:22:32.153488 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-05-28 17:22:32.153505 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-05-28 17:22:32.153511 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-05-28 17:22:32.153515 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-05-28 17:22:32.153520 | orchestrator | 2025-05-28 17:22:32.153525 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-05-28 17:22:32.153530 | orchestrator | Wednesday 28 May 2025 17:18:02 +0000 (0:00:04.854) 0:06:39.916 ********* 2025-05-28 
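Editor's note: the module tasks above reconcile the enabled mgr module set: iostat, nfs and restful are disabled, then dashboard and prometheus are enabled (balancer and status are skipped, likely because they are always-on modules in this release). The same reconciliation by hand:

```shell
ceph mgr module disable iostat
ceph mgr module disable nfs
ceph mgr module disable restful
ceph mgr module enable dashboard
ceph mgr module enable prometheus
ceph mgr module ls   # verify the resulting module set
```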
17:22:32.153534 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:22:32.153539 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:22:32.153544 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:22:32.153549 | orchestrator | 2025-05-28 17:22:32.153553 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-05-28 17:22:32.153558 | orchestrator | Wednesday 28 May 2025 17:18:03 +0000 (0:00:00.944) 0:06:40.860 ********* 2025-05-28 17:22:32.153563 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:22:32.153568 | orchestrator | 2025-05-28 17:22:32.153572 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-05-28 17:22:32.153577 | orchestrator | Wednesday 28 May 2025 17:18:04 +0000 (0:00:00.481) 0:06:41.342 ********* 2025-05-28 17:22:32.153582 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.153587 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.153591 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.153600 | orchestrator | 2025-05-28 17:22:32.153605 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-05-28 17:22:32.153610 | orchestrator | Wednesday 28 May 2025 17:18:04 +0000 (0:00:00.311) 0:06:41.654 ********* 2025-05-28 17:22:32.153614 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:22:32.153619 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:22:32.153624 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:22:32.153629 | orchestrator | 2025-05-28 17:22:32.153633 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-05-28 17:22:32.153638 | orchestrator | Wednesday 28 May 2025 17:18:05 +0000 (0:00:01.393) 0:06:43.047 ********* 2025-05-28 17:22:32.153643 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-28 17:22:32.153651 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-28 17:22:32.153656 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-28 17:22:32.153662 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.153671 | orchestrator | 2025-05-28 17:22:32.153680 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-05-28 17:22:32.153687 | orchestrator | Wednesday 28 May 2025 17:18:06 +0000 (0:00:00.621) 0:06:43.669 ********* 2025-05-28 17:22:32.153692 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.153696 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.153701 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.153706 | orchestrator | 2025-05-28 17:22:32.153710 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-05-28 17:22:32.153715 | orchestrator | 2025-05-28 17:22:32.153720 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-05-28 17:22:32.153724 | orchestrator | Wednesday 28 May 2025 17:18:07 +0000 (0:00:00.616) 0:06:44.285 ********* 2025-05-28 17:22:32.153729 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:22:32.153734 | orchestrator | 2025-05-28 17:22:32.153739 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-05-28 
17:22:32.153743 | orchestrator | Wednesday 28 May 2025 17:18:07 +0000 (0:00:00.736) 0:06:45.021 ********* 2025-05-28 17:22:32.153748 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:22:32.153753 | orchestrator | 2025-05-28 17:22:32.153757 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-05-28 17:22:32.153762 | orchestrator | Wednesday 28 May 2025 17:18:08 +0000 (0:00:00.510) 0:06:45.531 ********* 2025-05-28 17:22:32.153767 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.153771 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.153776 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.153781 | orchestrator | 2025-05-28 17:22:32.153785 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-05-28 17:22:32.153790 | orchestrator | Wednesday 28 May 2025 17:18:08 +0000 (0:00:00.288) 0:06:45.820 ********* 2025-05-28 17:22:32.153795 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.153799 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.153804 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.153809 | orchestrator | 2025-05-28 17:22:32.153813 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-05-28 17:22:32.153818 | orchestrator | Wednesday 28 May 2025 17:18:09 +0000 (0:00:00.894) 0:06:46.714 ********* 2025-05-28 17:22:32.153823 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.153827 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.153832 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.153837 | orchestrator | 2025-05-28 17:22:32.153841 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-05-28 17:22:32.153846 | orchestrator | Wednesday 28 May 2025 17:18:10 +0000 (0:00:00.658) 0:06:47.373 ********* 2025-05-28 17:22:32.153851 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.153862 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.153867 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.153871 | orchestrator | 2025-05-28 17:22:32.153876 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-05-28 17:22:32.153881 | orchestrator | Wednesday 28 May 2025 17:18:10 +0000 (0:00:00.653) 0:06:48.026 ********* 2025-05-28 17:22:32.153886 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.153890 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.153895 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.153900 | orchestrator | 2025-05-28 17:22:32.153904 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-05-28 17:22:32.153912 | orchestrator | Wednesday 28 May 2025 17:18:11 +0000 (0:00:00.282) 0:06:48.309 ********* 2025-05-28 17:22:32.153916 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.153921 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.153926 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.153931 | orchestrator | 2025-05-28 17:22:32.153935 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-05-28 17:22:32.153940 | orchestrator | Wednesday 28 May 2025 17:18:11 +0000 (0:00:00.532) 0:06:48.841 ********* 2025-05-28 17:22:32.153945 | orchestrator | skipping: 
[testbed-node-3] 2025-05-28 17:22:32.153950 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.153954 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.153959 | orchestrator | 2025-05-28 17:22:32.153964 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-05-28 17:22:32.153969 | orchestrator | Wednesday 28 May 2025 17:18:12 +0000 (0:00:00.299) 0:06:49.141 ********* 2025-05-28 17:22:32.153973 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.153978 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.153983 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.153988 | orchestrator | 2025-05-28 17:22:32.153993 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-05-28 17:22:32.153997 | orchestrator | Wednesday 28 May 2025 17:18:12 +0000 (0:00:00.633) 0:06:49.775 ********* 2025-05-28 17:22:32.154002 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.154007 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.154011 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.154036 | orchestrator | 2025-05-28 17:22:32.154041 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-05-28 17:22:32.154046 | orchestrator | Wednesday 28 May 2025 17:18:13 +0000 (0:00:00.652) 0:06:50.427 ********* 2025-05-28 17:22:32.154051 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.154055 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.154060 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.154065 | orchestrator | 2025-05-28 17:22:32.154070 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-05-28 17:22:32.154075 | orchestrator | Wednesday 28 May 2025 17:18:13 +0000 (0:00:00.546) 0:06:50.974 ********* 2025-05-28 17:22:32.154079 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.154084 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.154089 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.154094 | orchestrator | 2025-05-28 17:22:32.154098 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-05-28 17:22:32.154106 | orchestrator | Wednesday 28 May 2025 17:18:14 +0000 (0:00:00.284) 0:06:51.258 ********* 2025-05-28 17:22:32.154111 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.154116 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.154121 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.154125 | orchestrator | 2025-05-28 17:22:32.154130 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-05-28 17:22:32.154135 | orchestrator | Wednesday 28 May 2025 17:18:14 +0000 (0:00:00.341) 0:06:51.599 ********* 2025-05-28 17:22:32.154140 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.154145 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.154149 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.154158 | orchestrator | 2025-05-28 17:22:32.154163 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-05-28 17:22:32.154167 | orchestrator | Wednesday 28 May 2025 17:18:14 +0000 (0:00:00.329) 0:06:51.929 ********* 2025-05-28 17:22:32.154172 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.154177 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.154182 | orchestrator | ok: 
[testbed-node-5] 2025-05-28 17:22:32.154187 | orchestrator | 2025-05-28 17:22:32.154192 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-05-28 17:22:32.154196 | orchestrator | Wednesday 28 May 2025 17:18:15 +0000 (0:00:00.629) 0:06:52.558 ********* 2025-05-28 17:22:32.154201 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.154206 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.154211 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.154215 | orchestrator | 2025-05-28 17:22:32.154220 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-05-28 17:22:32.154225 | orchestrator | Wednesday 28 May 2025 17:18:15 +0000 (0:00:00.314) 0:06:52.872 ********* 2025-05-28 17:22:32.154230 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.154235 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.154239 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.154244 | orchestrator | 2025-05-28 17:22:32.154249 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-05-28 17:22:32.154254 | orchestrator | Wednesday 28 May 2025 17:18:16 +0000 (0:00:00.319) 0:06:53.192 ********* 2025-05-28 17:22:32.154259 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.154263 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.154268 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.154273 | orchestrator | 2025-05-28 17:22:32.154292 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-05-28 17:22:32.154297 | orchestrator | Wednesday 28 May 2025 17:18:16 +0000 (0:00:00.292) 0:06:53.484 ********* 2025-05-28 17:22:32.154302 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.154307 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.154311 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.154316 | orchestrator | 2025-05-28 17:22:32.154321 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-05-28 17:22:32.154325 | orchestrator | Wednesday 28 May 2025 17:18:16 +0000 (0:00:00.551) 0:06:54.036 ********* 2025-05-28 17:22:32.154330 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.154335 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.154340 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.154344 | orchestrator | 2025-05-28 17:22:32.154349 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-05-28 17:22:32.154354 | orchestrator | Wednesday 28 May 2025 17:18:17 +0000 (0:00:00.496) 0:06:54.532 ********* 2025-05-28 17:22:32.154359 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.154363 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.154368 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.154373 | orchestrator | 2025-05-28 17:22:32.154378 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-05-28 17:22:32.154382 | orchestrator | Wednesday 28 May 2025 17:18:17 +0000 (0:00:00.335) 0:06:54.868 ********* 2025-05-28 17:22:32.154392 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-28 17:22:32.154397 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-28 17:22:32.154402 | orchestrator | ok: [testbed-node-3 
-> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-28 17:22:32.154407 | orchestrator | 2025-05-28 17:22:32.154411 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-05-28 17:22:32.154416 | orchestrator | Wednesday 28 May 2025 17:18:18 +0000 (0:00:00.856) 0:06:55.725 ********* 2025-05-28 17:22:32.154421 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:22:32.154442 | orchestrator | 2025-05-28 17:22:32.154447 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-05-28 17:22:32.154452 | orchestrator | Wednesday 28 May 2025 17:18:19 +0000 (0:00:00.765) 0:06:56.490 ********* 2025-05-28 17:22:32.154457 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.154461 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.154466 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.154471 | orchestrator | 2025-05-28 17:22:32.154476 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-05-28 17:22:32.154480 | orchestrator | Wednesday 28 May 2025 17:18:19 +0000 (0:00:00.370) 0:06:56.861 ********* 2025-05-28 17:22:32.154485 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.154490 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.154495 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.154499 | orchestrator | 2025-05-28 17:22:32.154504 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-05-28 17:22:32.154508 | orchestrator | Wednesday 28 May 2025 17:18:20 +0000 (0:00:00.325) 0:06:57.187 ********* 2025-05-28 17:22:32.154513 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.154518 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.154523 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.154527 | orchestrator | 2025-05-28 17:22:32.154532 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-05-28 17:22:32.154537 | orchestrator | Wednesday 28 May 2025 17:18:21 +0000 (0:00:00.962) 0:06:58.149 ********* 2025-05-28 17:22:32.154541 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.154546 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.154551 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.154555 | orchestrator | 2025-05-28 17:22:32.154563 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-05-28 17:22:32.154568 | orchestrator | Wednesday 28 May 2025 17:18:21 +0000 (0:00:00.379) 0:06:58.529 ********* 2025-05-28 17:22:32.154573 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-28 17:22:32.154577 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-28 17:22:32.154582 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-28 17:22:32.154587 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-28 17:22:32.154592 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-28 17:22:32.154596 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-28 17:22:32.154601 | 
orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-28 17:22:32.154606 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-28 17:22:32.154610 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-28 17:22:32.154615 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-28 17:22:32.154620 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-28 17:22:32.154624 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-28 17:22:32.154629 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-28 17:22:32.154634 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-28 17:22:32.154638 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-28 17:22:32.154643 | orchestrator | 2025-05-28 17:22:32.154648 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2025-05-28 17:22:32.154652 | orchestrator | Wednesday 28 May 2025 17:18:24 +0000 (0:00:03.094) 0:07:01.624 ********* 2025-05-28 17:22:32.154661 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.154666 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.154671 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.154675 | orchestrator | 2025-05-28 17:22:32.154680 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-05-28 17:22:32.154685 | orchestrator | Wednesday 28 May 2025 17:18:24 +0000 (0:00:00.306) 0:07:01.931 ********* 2025-05-28 17:22:32.154690 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:22:32.154694 | orchestrator | 2025-05-28 17:22:32.154699 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-05-28 17:22:32.154703 | orchestrator | Wednesday 28 May 2025 17:18:25 +0000 (0:00:00.760) 0:07:02.691 ********* 2025-05-28 17:22:32.154708 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-28 17:22:32.154713 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-28 17:22:32.154718 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-28 17:22:32.154725 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-05-28 17:22:32.154730 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-05-28 17:22:32.154735 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-05-28 17:22:32.154739 | orchestrator | 2025-05-28 17:22:32.154744 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-05-28 17:22:32.154749 | orchestrator | Wednesday 28 May 2025 17:18:26 +0000 (0:00:00.907) 0:07:03.598 ********* 2025-05-28 17:22:32.154754 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 17:22:32.154758 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-28 17:22:32.154763 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-28 17:22:32.154768 | orchestrator | 
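Editor's note: the "Apply operating system tuning" loop that finished above sets the usual Ceph OSD sysctls on each storage node. Applied manually they would be:

```shell
sysctl -w fs.aio-max-nr=1048576
sysctl -w fs.file-max=26234859
sysctl -w vm.zone_reclaim_mode=0
sysctl -w vm.swappiness=10
sysctl -w vm.min_free_kbytes=67584
# Ansible's sysctl module typically also persists these
# (by default to /etc/sysctl.conf) so they survive reboots.
```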
2025-05-28 17:22:32.154773 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-05-28 17:22:32.154778 | orchestrator | Wednesday 28 May 2025 17:18:28 +0000 (0:00:01.886) 0:07:05.484 ********* 2025-05-28 17:22:32.154782 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-28 17:22:32.154787 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-28 17:22:32.154792 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:22:32.154797 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-28 17:22:32.154801 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-28 17:22:32.154806 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:22:32.154811 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-28 17:22:32.154815 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-28 17:22:32.154820 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:22:32.154825 | orchestrator | 2025-05-28 17:22:32.154829 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-05-28 17:22:32.154834 | orchestrator | Wednesday 28 May 2025 17:18:29 +0000 (0:00:01.344) 0:07:06.829 ********* 2025-05-28 17:22:32.154839 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-28 17:22:32.154844 | orchestrator | 2025-05-28 17:22:32.154848 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-05-28 17:22:32.154853 | orchestrator | Wednesday 28 May 2025 17:18:31 +0000 (0:00:02.019) 0:07:08.848 ********* 2025-05-28 17:22:32.154867 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:22:32.154872 | orchestrator | 2025-05-28 17:22:32.154877 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-05-28 17:22:32.154882 | orchestrator | Wednesday 28 May 2025 17:18:32 +0000 (0:00:00.544) 0:07:09.393 ********* 2025-05-28 17:22:32.154887 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b27f73ed-a290-5ab5-82ba-70ebe910dd97', 'data_vg': 'ceph-b27f73ed-a290-5ab5-82ba-70ebe910dd97'}) 2025-05-28 17:22:32.154896 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25', 'data_vg': 'ceph-b5b3f734-7a3a-56eb-b9e1-00e08c7f7e25'}) 2025-05-28 17:22:32.154901 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-91f15584-1a8a-582b-a00a-c533bea87f37', 'data_vg': 'ceph-91f15584-1a8a-582b-a00a-c533bea87f37'}) 2025-05-28 17:22:32.154906 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d85522ca-9ab4-5810-aefe-18d74b0f7dbe', 'data_vg': 'ceph-d85522ca-9ab4-5810-aefe-18d74b0f7dbe'}) 2025-05-28 17:22:32.154911 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7e811d1b-ccc9-571e-beba-983efbae239d', 'data_vg': 'ceph-7e811d1b-ccc9-571e-beba-983efbae239d'}) 2025-05-28 17:22:32.154916 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-fbdc558b-af0f-50ef-b610-4a3c4fb87cac', 'data_vg': 'ceph-fbdc558b-af0f-50ef-b610-4a3c4fb87cac'}) 2025-05-28 17:22:32.154920 | orchestrator | 2025-05-28 17:22:32.154925 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-05-28 17:22:32.154930 | orchestrator | Wednesday 28 May 2025 17:19:11 +0000 (0:00:39.459) 0:07:48.852 ********* 2025-05-28 
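Editor's note: the 39-second "Use ceph-volume to create osds" task above creates one bluestore OSD per listed logical volume. Each loop item roughly expands to a `ceph-volume lvm create` call; the VG/LV names below are taken from the first item on testbed-node-3:

```shell
ceph-volume lvm create --bluestore \
    --data ceph-b27f73ed-a290-5ab5-82ba-70ebe910dd97/osd-block-b27f73ed-a290-5ab5-82ba-70ebe910dd97
```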
17:22:32.154935 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.154939 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.154944 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.154949 | orchestrator | 2025-05-28 17:22:32.154954 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-05-28 17:22:32.154958 | orchestrator | Wednesday 28 May 2025 17:19:12 +0000 (0:00:00.534) 0:07:49.387 ********* 2025-05-28 17:22:32.154963 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:22:32.154968 | orchestrator | 2025-05-28 17:22:32.154973 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-05-28 17:22:32.154977 | orchestrator | Wednesday 28 May 2025 17:19:12 +0000 (0:00:00.532) 0:07:49.919 ********* 2025-05-28 17:22:32.154982 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.154987 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.154992 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.154996 | orchestrator | 2025-05-28 17:22:32.155001 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-05-28 17:22:32.155006 | orchestrator | Wednesday 28 May 2025 17:19:13 +0000 (0:00:00.633) 0:07:50.552 ********* 2025-05-28 17:22:32.155010 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.155015 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.155020 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.155025 | orchestrator | 2025-05-28 17:22:32.155029 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-05-28 17:22:32.155034 | orchestrator | Wednesday 28 May 2025 17:19:16 +0000 (0:00:02.924) 0:07:53.477 ********* 2025-05-28 17:22:32.155041 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:22:32.155046 | orchestrator | 2025-05-28 17:22:32.155051 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2025-05-28 17:22:32.155056 | orchestrator | Wednesday 28 May 2025 17:19:16 +0000 (0:00:00.501) 0:07:53.978 ********* 2025-05-28 17:22:32.155061 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:22:32.155066 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:22:32.155070 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:22:32.155075 | orchestrator | 2025-05-28 17:22:32.155080 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-05-28 17:22:32.155085 | orchestrator | Wednesday 28 May 2025 17:19:18 +0000 (0:00:01.152) 0:07:55.131 ********* 2025-05-28 17:22:32.155090 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:22:32.155094 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:22:32.155099 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:22:32.155104 | orchestrator | 2025-05-28 17:22:32.155109 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-05-28 17:22:32.155118 | orchestrator | Wednesday 28 May 2025 17:19:19 +0000 (0:00:01.352) 0:07:56.483 ********* 2025-05-28 17:22:32.155123 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:22:32.155128 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:22:32.155133 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:22:32.155137 | 
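Editor's note: unit generation and activation for the OSDs mirror the mon/mgr pattern above. Per OSD id, the start sequence is approximately the following (unit names assume the usual ceph-osd@<id> template; ids match the start loop that follows):

```shell
systemctl enable --now ceph-osd.target
systemctl start ceph-osd@3   # repeated for each OSD id on the host
```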
orchestrator | 2025-05-28 17:22:32.155142 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-05-28 17:22:32.155147 | orchestrator | Wednesday 28 May 2025 17:19:21 +0000 (0:00:01.769) 0:07:58.252 ********* 2025-05-28 17:22:32.155152 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.155156 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.155161 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.155166 | orchestrator | 2025-05-28 17:22:32.155170 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2025-05-28 17:22:32.155175 | orchestrator | Wednesday 28 May 2025 17:19:21 +0000 (0:00:00.330) 0:07:58.583 ********* 2025-05-28 17:22:32.155180 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.155185 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.155189 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.155194 | orchestrator | 2025-05-28 17:22:32.155199 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-05-28 17:22:32.155204 | orchestrator | Wednesday 28 May 2025 17:19:21 +0000 (0:00:00.310) 0:07:58.894 ********* 2025-05-28 17:22:32.155211 | orchestrator | ok: [testbed-node-3] => (item=3) 2025-05-28 17:22:32.155216 | orchestrator | ok: [testbed-node-4] => (item=5) 2025-05-28 17:22:32.155221 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-05-28 17:22:32.155226 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-28 17:22:32.155230 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-05-28 17:22:32.155235 | orchestrator | ok: [testbed-node-5] => (item=4) 2025-05-28 17:22:32.155240 | orchestrator | 2025-05-28 17:22:32.155244 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-05-28 17:22:32.155249 | orchestrator | Wednesday 28 May 2025 17:19:23 +0000 (0:00:01.223) 0:08:00.117 ********* 2025-05-28 17:22:32.155254 | orchestrator | changed: [testbed-node-3] => (item=3) 2025-05-28 17:22:32.155259 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-05-28 17:22:32.155264 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-05-28 17:22:32.155269 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-05-28 17:22:32.155273 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-05-28 17:22:32.155311 | orchestrator | changed: [testbed-node-5] => (item=4) 2025-05-28 17:22:32.155316 | orchestrator | 2025-05-28 17:22:32.155321 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2025-05-28 17:22:32.155325 | orchestrator | Wednesday 28 May 2025 17:19:25 +0000 (0:00:02.119) 0:08:02.236 ********* 2025-05-28 17:22:32.155330 | orchestrator | changed: [testbed-node-3] => (item=3) 2025-05-28 17:22:32.155335 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-05-28 17:22:32.155339 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-05-28 17:22:32.155344 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-05-28 17:22:32.155349 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-05-28 17:22:32.155353 | orchestrator | changed: [testbed-node-5] => (item=4) 2025-05-28 17:22:32.155358 | orchestrator | 2025-05-28 17:22:32.155363 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-05-28 17:22:32.155367 | orchestrator | Wednesday 28 May 2025 17:19:28 +0000 (0:00:03.519) 0:08:05.756 ********* 2025-05-28 
17:22:32.155372 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.155377 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.155381 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-28 17:22:32.155386 | orchestrator | 2025-05-28 17:22:32.155391 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-05-28 17:22:32.155395 | orchestrator | Wednesday 28 May 2025 17:19:32 +0000 (0:00:03.440) 0:08:09.197 ********* 2025-05-28 17:22:32.155400 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.155409 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.155413 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2025-05-28 17:22:32.155418 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-28 17:22:32.155423 | orchestrator | 2025-05-28 17:22:32.155428 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-05-28 17:22:32.155432 | orchestrator | Wednesday 28 May 2025 17:19:44 +0000 (0:00:12.847) 0:08:22.044 ********* 2025-05-28 17:22:32.155437 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.155442 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.155446 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.155451 | orchestrator | 2025-05-28 17:22:32.155456 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-05-28 17:22:32.155461 | orchestrator | Wednesday 28 May 2025 17:19:45 +0000 (0:00:00.851) 0:08:22.896 ********* 2025-05-28 17:22:32.155465 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.155470 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.155475 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.155479 | orchestrator | 2025-05-28 17:22:32.155487 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-05-28 17:22:32.155492 | orchestrator | Wednesday 28 May 2025 17:19:46 +0000 (0:00:00.583) 0:08:23.480 ********* 2025-05-28 17:22:32.155497 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:22:32.155501 | orchestrator | 2025-05-28 17:22:32.155506 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-05-28 17:22:32.155511 | orchestrator | Wednesday 28 May 2025 17:19:46 +0000 (0:00:00.518) 0:08:23.998 ********* 2025-05-28 17:22:32.155516 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-28 17:22:32.155520 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-28 17:22:32.155525 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-28 17:22:32.155530 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.155535 | orchestrator | 2025-05-28 17:22:32.155539 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-05-28 17:22:32.155544 | orchestrator | Wednesday 28 May 2025 17:19:47 +0000 (0:00:00.377) 0:08:24.376 ********* 2025-05-28 17:22:32.155549 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.155554 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.155558 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.155563 | orchestrator | 2025-05-28 
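
Setting noup before OSD creation and unsetting it afterwards keeps the new OSDs from being marked up one at a time, so peering and rebalancing start only once the whole batch is running; the wait task then polls the cluster (one retry was needed here) until every OSD reports up. The flag handling and the completion check are approximately:

    ceph osd set noup      # before ceph-volume creates the OSDs
    ceph osd unset noup    # after all OSD units have started
    # Retried until num_osds == num_up_osds in the JSON output:
    ceph osd stat -f json
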
17:22:32.155568 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-05-28 17:22:32.155572 | orchestrator | Wednesday 28 May 2025 17:19:47 +0000 (0:00:00.283) 0:08:24.660 ********* 2025-05-28 17:22:32.155577 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.155582 | orchestrator | 2025-05-28 17:22:32.155587 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-05-28 17:22:32.155591 | orchestrator | Wednesday 28 May 2025 17:19:47 +0000 (0:00:00.234) 0:08:24.895 ********* 2025-05-28 17:22:32.155596 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.155601 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.155605 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.155610 | orchestrator | 2025-05-28 17:22:32.155615 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-05-28 17:22:32.155620 | orchestrator | Wednesday 28 May 2025 17:19:48 +0000 (0:00:00.608) 0:08:25.503 ********* 2025-05-28 17:22:32.155624 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.155629 | orchestrator | 2025-05-28 17:22:32.155638 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-05-28 17:22:32.155642 | orchestrator | Wednesday 28 May 2025 17:19:48 +0000 (0:00:00.215) 0:08:25.719 ********* 2025-05-28 17:22:32.155647 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.155655 | orchestrator | 2025-05-28 17:22:32.155660 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-05-28 17:22:32.155665 | orchestrator | Wednesday 28 May 2025 17:19:48 +0000 (0:00:00.234) 0:08:25.954 ********* 2025-05-28 17:22:32.155670 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.155674 | orchestrator | 2025-05-28 17:22:32.155679 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-05-28 17:22:32.155684 | orchestrator | Wednesday 28 May 2025 17:19:48 +0000 (0:00:00.117) 0:08:26.071 ********* 2025-05-28 17:22:32.155689 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.155693 | orchestrator | 2025-05-28 17:22:32.155698 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-05-28 17:22:32.155703 | orchestrator | Wednesday 28 May 2025 17:19:49 +0000 (0:00:00.207) 0:08:26.279 ********* 2025-05-28 17:22:32.155707 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.155712 | orchestrator | 2025-05-28 17:22:32.155717 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-05-28 17:22:32.155722 | orchestrator | Wednesday 28 May 2025 17:19:49 +0000 (0:00:00.226) 0:08:26.506 ********* 2025-05-28 17:22:32.155726 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-28 17:22:32.155731 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-28 17:22:32.155736 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-28 17:22:32.155740 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.155745 | orchestrator | 2025-05-28 17:22:32.155750 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-05-28 17:22:32.155755 | orchestrator | Wednesday 28 May 2025 17:19:49 +0000 (0:00:00.382) 0:08:26.888 ********* 2025-05-28 17:22:32.155759 | 
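
The handler chain above is ceph-ansible's rolling-restart path for OSDs; it stays skipped in this run because no already-running OSD changed. When it does fire, it quiesces data movement around the restarts, along the lines of:

    ceph balancer off
    ceph osd pool set <pool> pg_autoscale_mode off   # per affected pool
    systemctl restart ceph-osd@<id>                  # one node at a time
    ceph osd pool set <pool> pg_autoscale_mode on
    ceph balancer on
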
orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.155764 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.155769 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.155774 | orchestrator | 2025-05-28 17:22:32.155778 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-05-28 17:22:32.155783 | orchestrator | Wednesday 28 May 2025 17:19:50 +0000 (0:00:00.284) 0:08:27.173 ********* 2025-05-28 17:22:32.155787 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.155791 | orchestrator | 2025-05-28 17:22:32.155796 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-05-28 17:22:32.155800 | orchestrator | Wednesday 28 May 2025 17:19:50 +0000 (0:00:00.728) 0:08:27.901 ********* 2025-05-28 17:22:32.155805 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.155809 | orchestrator | 2025-05-28 17:22:32.155814 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-05-28 17:22:32.155818 | orchestrator | 2025-05-28 17:22:32.155822 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-05-28 17:22:32.155827 | orchestrator | Wednesday 28 May 2025 17:19:51 +0000 (0:00:00.648) 0:08:28.549 ********* 2025-05-28 17:22:32.155832 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:22:32.155837 | orchestrator | 2025-05-28 17:22:32.155841 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-05-28 17:22:32.155846 | orchestrator | Wednesday 28 May 2025 17:19:52 +0000 (0:00:01.245) 0:08:29.795 ********* 2025-05-28 17:22:32.155853 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:22:32.155857 | orchestrator | 2025-05-28 17:22:32.155862 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-05-28 17:22:32.155866 | orchestrator | Wednesday 28 May 2025 17:19:53 +0000 (0:00:01.199) 0:08:30.995 ********* 2025-05-28 17:22:32.155871 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.155875 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.155922 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.155928 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.155932 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.155937 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.155941 | orchestrator | 2025-05-28 17:22:32.155946 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-05-28 17:22:32.155950 | orchestrator | Wednesday 28 May 2025 17:19:54 +0000 (0:00:00.816) 0:08:31.811 ********* 2025-05-28 17:22:32.155955 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.155959 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.155964 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.155968 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.155973 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.155977 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.155982 | orchestrator | 2025-05-28 17:22:32.155986 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2025-05-28 17:22:32.155991 | orchestrator | Wednesday 28 May 2025 17:19:55 +0000 (0:00:00.944) 0:08:32.756 ********* 2025-05-28 17:22:32.155995 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.156000 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.156004 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.156009 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.156013 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.156018 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.156022 | orchestrator | 2025-05-28 17:22:32.156027 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-05-28 17:22:32.156031 | orchestrator | Wednesday 28 May 2025 17:19:56 +0000 (0:00:01.226) 0:08:33.983 ********* 2025-05-28 17:22:32.156036 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.156040 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.156045 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.156049 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.156065 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.156070 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.156075 | orchestrator | 2025-05-28 17:22:32.156080 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-05-28 17:22:32.156084 | orchestrator | Wednesday 28 May 2025 17:19:57 +0000 (0:00:00.962) 0:08:34.945 ********* 2025-05-28 17:22:32.156089 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.156093 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.156097 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.156102 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.156106 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.156111 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.156115 | orchestrator | 2025-05-28 17:22:32.156120 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-05-28 17:22:32.156124 | orchestrator | Wednesday 28 May 2025 17:19:58 +0000 (0:00:00.806) 0:08:35.752 ********* 2025-05-28 17:22:32.156129 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.156133 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.156138 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.156142 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.156147 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.156151 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.156155 | orchestrator | 2025-05-28 17:22:32.156160 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-05-28 17:22:32.156165 | orchestrator | Wednesday 28 May 2025 17:19:59 +0000 (0:00:00.581) 0:08:36.334 ********* 2025-05-28 17:22:32.156169 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.156173 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.156178 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.156182 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.156187 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.156191 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.156195 | orchestrator | 2025-05-28 17:22:32.156208 | orchestrator | TASK [ceph-handler : Check for a ceph-crash 
container] ************************* 2025-05-28 17:22:32.156212 | orchestrator | Wednesday 28 May 2025 17:20:00 +0000 (0:00:00.780) 0:08:37.114 ********* 2025-05-28 17:22:32.156217 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.156221 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.156226 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.156230 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.156234 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.156239 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.156243 | orchestrator | 2025-05-28 17:22:32.156248 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-05-28 17:22:32.156252 | orchestrator | Wednesday 28 May 2025 17:20:01 +0000 (0:00:01.014) 0:08:38.128 ********* 2025-05-28 17:22:32.156257 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.156261 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.156266 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.156270 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.156274 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.156291 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.156296 | orchestrator | 2025-05-28 17:22:32.156300 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-05-28 17:22:32.156305 | orchestrator | Wednesday 28 May 2025 17:20:02 +0000 (0:00:01.445) 0:08:39.573 ********* 2025-05-28 17:22:32.156309 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.156314 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.156318 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.156323 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.156327 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.156332 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.156336 | orchestrator | 2025-05-28 17:22:32.156340 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-05-28 17:22:32.156345 | orchestrator | Wednesday 28 May 2025 17:20:03 +0000 (0:00:00.605) 0:08:40.179 ********* 2025-05-28 17:22:32.156350 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.156357 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.156362 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.156366 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.156371 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.156375 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.156380 | orchestrator | 2025-05-28 17:22:32.156384 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-05-28 17:22:32.156389 | orchestrator | Wednesday 28 May 2025 17:20:03 +0000 (0:00:00.771) 0:08:40.951 ********* 2025-05-28 17:22:32.156393 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.156398 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.156402 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.156407 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.156411 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.156416 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.156420 | orchestrator | 2025-05-28 17:22:32.156425 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-05-28 17:22:32.156429 | orchestrator | Wednesday 28 May 2025 
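
Each "Check for a ... container" task in these plays queries the container runtime for a matching daemon container, and the Set_fact handler_*_status tasks record the results so later handlers know which daemons exist on which node. A hypothetical equivalent of the mon check on testbed-node-0 (ceph-ansible uses whichever container binary is configured, podman or docker; the container name format is an assumption here):

    podman ps -q --filter name=ceph-mon-testbed-node-0
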
17:20:04 +0000 (0:00:00.603) 0:08:41.554 ********* 2025-05-28 17:22:32.156434 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.156438 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.156443 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.156447 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.156452 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.156456 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.156461 | orchestrator | 2025-05-28 17:22:32.156465 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-05-28 17:22:32.156470 | orchestrator | Wednesday 28 May 2025 17:20:05 +0000 (0:00:00.801) 0:08:42.356 ********* 2025-05-28 17:22:32.156474 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.156483 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.156487 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.156492 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.156496 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.156501 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.156505 | orchestrator | 2025-05-28 17:22:32.156510 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-05-28 17:22:32.156514 | orchestrator | Wednesday 28 May 2025 17:20:05 +0000 (0:00:00.599) 0:08:42.955 ********* 2025-05-28 17:22:32.156519 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.156523 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.156528 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.156535 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.156540 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.156544 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.156549 | orchestrator | 2025-05-28 17:22:32.156553 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-05-28 17:22:32.156558 | orchestrator | Wednesday 28 May 2025 17:20:06 +0000 (0:00:00.865) 0:08:43.821 ********* 2025-05-28 17:22:32.156562 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:22:32.156567 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:22:32.156571 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:22:32.156576 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.156580 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.156585 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.156589 | orchestrator | 2025-05-28 17:22:32.156594 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-05-28 17:22:32.156598 | orchestrator | Wednesday 28 May 2025 17:20:07 +0000 (0:00:00.589) 0:08:44.411 ********* 2025-05-28 17:22:32.156603 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.156607 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.156612 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.156616 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.156621 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.156625 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.156630 | orchestrator | 2025-05-28 17:22:32.156634 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-05-28 17:22:32.156639 | orchestrator | Wednesday 28 May 2025 17:20:08 +0000 (0:00:00.766) 0:08:45.177 
********* 2025-05-28 17:22:32.156643 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.156648 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.156652 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.156657 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.156661 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.156666 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.156670 | orchestrator | 2025-05-28 17:22:32.156675 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-05-28 17:22:32.156679 | orchestrator | Wednesday 28 May 2025 17:20:08 +0000 (0:00:00.627) 0:08:45.805 ********* 2025-05-28 17:22:32.156684 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.156688 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.156693 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.156697 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.156701 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.156706 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.156710 | orchestrator | 2025-05-28 17:22:32.156715 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-05-28 17:22:32.156719 | orchestrator | Wednesday 28 May 2025 17:20:09 +0000 (0:00:01.259) 0:08:47.065 ********* 2025-05-28 17:22:32.156724 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:22:32.156728 | orchestrator | 2025-05-28 17:22:32.156733 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-05-28 17:22:32.156737 | orchestrator | Wednesday 28 May 2025 17:20:13 +0000 (0:00:03.962) 0:08:51.027 ********* 2025-05-28 17:22:32.156742 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.156751 | orchestrator | 2025-05-28 17:22:32.156756 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-05-28 17:22:32.156760 | orchestrator | Wednesday 28 May 2025 17:20:16 +0000 (0:00:02.132) 0:08:53.160 ********* 2025-05-28 17:22:32.156765 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.156769 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:22:32.156774 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:22:32.156778 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:22:32.156783 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:22:32.156787 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:22:32.156792 | orchestrator | 2025-05-28 17:22:32.156796 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-05-28 17:22:32.156801 | orchestrator | Wednesday 28 May 2025 17:20:17 +0000 (0:00:01.630) 0:08:54.791 ********* 2025-05-28 17:22:32.156808 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:22:32.156812 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:22:32.156817 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:22:32.156821 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:22:32.156826 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:22:32.156830 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:22:32.156835 | orchestrator | 2025-05-28 17:22:32.156839 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-05-28 17:22:32.156844 | orchestrator | Wednesday 28 May 2025 17:20:18 +0000 (0:00:00.947) 0:08:55.738 ********* 2025-05-28 17:22:32.156848 | orchestrator | included: 
/ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:22:32.156853 | orchestrator | 2025-05-28 17:22:32.156858 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-05-28 17:22:32.156863 | orchestrator | Wednesday 28 May 2025 17:20:19 +0000 (0:00:01.247) 0:08:56.986 ********* 2025-05-28 17:22:32.156867 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:22:32.156872 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:22:32.156876 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:22:32.156880 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:22:32.156885 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:22:32.156889 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:22:32.156894 | orchestrator | 2025-05-28 17:22:32.156898 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-05-28 17:22:32.156903 | orchestrator | Wednesday 28 May 2025 17:20:21 +0000 (0:00:01.997) 0:08:58.984 ********* 2025-05-28 17:22:32.156907 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:22:32.156912 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:22:32.156916 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:22:32.156921 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:22:32.156925 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:22:32.156930 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:22:32.156934 | orchestrator | 2025-05-28 17:22:32.156939 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-05-28 17:22:32.156943 | orchestrator | Wednesday 28 May 2025 17:20:25 +0000 (0:00:03.204) 0:09:02.188 ********* 2025-05-28 17:22:32.156949 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:22:32.156953 | orchestrator | 2025-05-28 17:22:32.156958 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-05-28 17:22:32.156963 | orchestrator | Wednesday 28 May 2025 17:20:26 +0000 (0:00:01.336) 0:09:03.525 ********* 2025-05-28 17:22:32.156967 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.156972 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.156976 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.156981 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.156985 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.156990 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.156999 | orchestrator | 2025-05-28 17:22:32.157003 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-05-28 17:22:32.157008 | orchestrator | Wednesday 28 May 2025 17:20:27 +0000 (0:00:00.830) 0:09:04.355 ********* 2025-05-28 17:22:32.157013 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:22:32.157017 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:22:32.157022 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:22:32.157026 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:22:32.157030 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:22:32.157035 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:22:32.157039 | orchestrator | 2025-05-28 17:22:32.157044 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_crash_handler_called after restart] ******* 2025-05-28 17:22:32.157048 | orchestrator | Wednesday 28 May 2025 17:20:29 +0000 (0:00:02.282) 0:09:06.637 ********* 2025-05-28 17:22:32.157053 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:22:32.157057 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:22:32.157062 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:22:32.157066 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.157071 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.157075 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.157080 | orchestrator | 2025-05-28 17:22:32.157084 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-05-28 17:22:32.157089 | orchestrator | 2025-05-28 17:22:32.157093 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-05-28 17:22:32.157098 | orchestrator | Wednesday 28 May 2025 17:20:30 +0000 (0:00:01.161) 0:09:07.799 ********* 2025-05-28 17:22:32.157102 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:22:32.157107 | orchestrator | 2025-05-28 17:22:32.157112 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-05-28 17:22:32.157116 | orchestrator | Wednesday 28 May 2025 17:20:31 +0000 (0:00:00.537) 0:09:08.337 ********* 2025-05-28 17:22:32.157121 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:22:32.157125 | orchestrator | 2025-05-28 17:22:32.157168 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-05-28 17:22:32.157181 | orchestrator | Wednesday 28 May 2025 17:20:31 +0000 (0:00:00.766) 0:09:09.103 ********* 2025-05-28 17:22:32.157185 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.157190 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.157194 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.157199 | orchestrator | 2025-05-28 17:22:32.157203 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-05-28 17:22:32.157208 | orchestrator | Wednesday 28 May 2025 17:20:32 +0000 (0:00:00.321) 0:09:09.425 ********* 2025-05-28 17:22:32.157212 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.157217 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.157221 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.157225 | orchestrator | 2025-05-28 17:22:32.157230 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-05-28 17:22:32.157238 | orchestrator | Wednesday 28 May 2025 17:20:32 +0000 (0:00:00.683) 0:09:10.108 ********* 2025-05-28 17:22:32.157242 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.157247 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.157251 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.157256 | orchestrator | 2025-05-28 17:22:32.157260 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-05-28 17:22:32.157265 | orchestrator | Wednesday 28 May 2025 17:20:33 +0000 (0:00:00.992) 0:09:11.101 ********* 2025-05-28 17:22:32.157269 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.157273 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.157292 | orchestrator | ok: 
[testbed-node-5] 2025-05-28 17:22:32.157297 | orchestrator | 2025-05-28 17:22:32.157301 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-05-28 17:22:32.157310 | orchestrator | Wednesday 28 May 2025 17:20:34 +0000 (0:00:00.681) 0:09:11.782 ********* 2025-05-28 17:22:32.157315 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.157320 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.157324 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.157329 | orchestrator | 2025-05-28 17:22:32.157333 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-05-28 17:22:32.157338 | orchestrator | Wednesday 28 May 2025 17:20:35 +0000 (0:00:00.351) 0:09:12.133 ********* 2025-05-28 17:22:32.157342 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.157347 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.157351 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.157356 | orchestrator | 2025-05-28 17:22:32.157360 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-05-28 17:22:32.157365 | orchestrator | Wednesday 28 May 2025 17:20:35 +0000 (0:00:00.341) 0:09:12.475 ********* 2025-05-28 17:22:32.157369 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.157373 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.157378 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.157382 | orchestrator | 2025-05-28 17:22:32.157387 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-05-28 17:22:32.157391 | orchestrator | Wednesday 28 May 2025 17:20:35 +0000 (0:00:00.611) 0:09:13.086 ********* 2025-05-28 17:22:32.157396 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.157400 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.157405 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.157409 | orchestrator | 2025-05-28 17:22:32.157414 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-05-28 17:22:32.157421 | orchestrator | Wednesday 28 May 2025 17:20:36 +0000 (0:00:00.783) 0:09:13.870 ********* 2025-05-28 17:22:32.157426 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.157430 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.157435 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.157439 | orchestrator | 2025-05-28 17:22:32.157444 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-05-28 17:22:32.157448 | orchestrator | Wednesday 28 May 2025 17:20:37 +0000 (0:00:00.795) 0:09:14.665 ********* 2025-05-28 17:22:32.157453 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.157457 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.157462 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.157466 | orchestrator | 2025-05-28 17:22:32.157471 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-05-28 17:22:32.157475 | orchestrator | Wednesday 28 May 2025 17:20:37 +0000 (0:00:00.287) 0:09:14.953 ********* 2025-05-28 17:22:32.157480 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.157484 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.157489 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.157493 | orchestrator | 2025-05-28 17:22:32.157498 | 
orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-05-28 17:22:32.157502 | orchestrator | Wednesday 28 May 2025 17:20:38 +0000 (0:00:00.538) 0:09:15.492 ********* 2025-05-28 17:22:32.157507 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.157511 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.157516 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.157520 | orchestrator | 2025-05-28 17:22:32.157525 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-05-28 17:22:32.157529 | orchestrator | Wednesday 28 May 2025 17:20:38 +0000 (0:00:00.308) 0:09:15.801 ********* 2025-05-28 17:22:32.157534 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.157538 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.157542 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.157547 | orchestrator | 2025-05-28 17:22:32.157551 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-05-28 17:22:32.157556 | orchestrator | Wednesday 28 May 2025 17:20:39 +0000 (0:00:00.402) 0:09:16.204 ********* 2025-05-28 17:22:32.157565 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.157569 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.157574 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.157578 | orchestrator | 2025-05-28 17:22:32.157583 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-05-28 17:22:32.157587 | orchestrator | Wednesday 28 May 2025 17:20:39 +0000 (0:00:00.325) 0:09:16.529 ********* 2025-05-28 17:22:32.157592 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.157596 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.157601 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.157605 | orchestrator | 2025-05-28 17:22:32.157610 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-05-28 17:22:32.157614 | orchestrator | Wednesday 28 May 2025 17:20:40 +0000 (0:00:00.586) 0:09:17.116 ********* 2025-05-28 17:22:32.157619 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.157623 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.157628 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.157632 | orchestrator | 2025-05-28 17:22:32.157637 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-05-28 17:22:32.157641 | orchestrator | Wednesday 28 May 2025 17:20:40 +0000 (0:00:00.304) 0:09:17.420 ********* 2025-05-28 17:22:32.157646 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.157650 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.157655 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.157659 | orchestrator | 2025-05-28 17:22:32.157664 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-05-28 17:22:32.157668 | orchestrator | Wednesday 28 May 2025 17:20:40 +0000 (0:00:00.347) 0:09:17.768 ********* 2025-05-28 17:22:32.157673 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.157680 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.157684 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.157689 | orchestrator | 2025-05-28 17:22:32.157693 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-05-28 17:22:32.157698 | 
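
In the ceph-mds play, create_mds_filesystems.yml runs once (from testbed-node-3, delegated to the first mon) and provisions the CephFS pools and the filesystem itself, as the next tasks show. A sketch of the equivalent CLI, using the pool parameters from the log and assuming the default filesystem name cephfs:

    # 16 PGs each, replicated via replicated_rule (size 3 per the play vars):
    ceph osd pool create cephfs_data 16 16 replicated replicated_rule
    ceph osd pool create cephfs_metadata 16 16 replicated replicated_rule
    ceph fs new cephfs cephfs_metadata cephfs_data
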
orchestrator | Wednesday 28 May 2025 17:20:40 +0000 (0:00:00.338) 0:09:18.106 ********* 2025-05-28 17:22:32.157703 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.157707 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.157712 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.157716 | orchestrator | 2025-05-28 17:22:32.157721 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-05-28 17:22:32.157725 | orchestrator | Wednesday 28 May 2025 17:20:41 +0000 (0:00:00.837) 0:09:18.943 ********* 2025-05-28 17:22:32.157730 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.157735 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.157740 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-05-28 17:22:32.157744 | orchestrator | 2025-05-28 17:22:32.157749 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-05-28 17:22:32.157753 | orchestrator | Wednesday 28 May 2025 17:20:42 +0000 (0:00:00.444) 0:09:19.388 ********* 2025-05-28 17:22:32.157758 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-28 17:22:32.157762 | orchestrator | 2025-05-28 17:22:32.157767 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-05-28 17:22:32.157771 | orchestrator | Wednesday 28 May 2025 17:20:44 +0000 (0:00:02.059) 0:09:21.447 ********* 2025-05-28 17:22:32.157777 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-05-28 17:22:32.157783 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.157787 | orchestrator | 2025-05-28 17:22:32.157792 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-05-28 17:22:32.157796 | orchestrator | Wednesday 28 May 2025 17:20:44 +0000 (0:00:00.211) 0:09:21.659 ********* 2025-05-28 17:22:32.157808 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-28 17:22:32.157815 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-28 17:22:32.157820 | orchestrator | 2025-05-28 17:22:32.157824 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-05-28 17:22:32.157829 | orchestrator | Wednesday 28 May 2025 17:20:53 +0000 (0:00:08.725) 0:09:30.384 ********* 2025-05-28 17:22:32.157833 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-28 17:22:32.157838 | orchestrator | 2025-05-28 17:22:32.157842 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-05-28 17:22:32.157847 | orchestrator | Wednesday 28 May 2025 17:20:57 +0000 (0:00:03.898) 0:09:34.283 ********* 2025-05-28 17:22:32.157852 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for 
testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:22:32.157856 | orchestrator | 2025-05-28 17:22:32.157861 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-05-28 17:22:32.157865 | orchestrator | Wednesday 28 May 2025 17:20:57 +0000 (0:00:00.534) 0:09:34.818 ********* 2025-05-28 17:22:32.157870 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-28 17:22:32.157874 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-28 17:22:32.157879 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-28 17:22:32.157883 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-05-28 17:22:32.157888 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-05-28 17:22:32.157893 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-05-28 17:22:32.157897 | orchestrator | 2025-05-28 17:22:32.157902 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-05-28 17:22:32.157906 | orchestrator | Wednesday 28 May 2025 17:20:58 +0000 (0:00:01.041) 0:09:35.860 ********* 2025-05-28 17:22:32.157911 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 17:22:32.157915 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-28 17:22:32.157920 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-28 17:22:32.157924 | orchestrator | 2025-05-28 17:22:32.157929 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-05-28 17:22:32.157933 | orchestrator | Wednesday 28 May 2025 17:21:01 +0000 (0:00:02.435) 0:09:38.296 ********* 2025-05-28 17:22:32.157938 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-28 17:22:32.157943 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-28 17:22:32.157947 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:22:32.157952 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-28 17:22:32.157956 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-28 17:22:32.157961 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:22:32.157965 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-28 17:22:32.157970 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-28 17:22:32.157977 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:22:32.157981 | orchestrator | 2025-05-28 17:22:32.157986 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-05-28 17:22:32.157990 | orchestrator | Wednesday 28 May 2025 17:21:02 +0000 (0:00:01.428) 0:09:39.724 ********* 2025-05-28 17:22:32.157995 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:22:32.158000 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:22:32.158008 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:22:32.158012 | orchestrator | 2025-05-28 17:22:32.158042 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-05-28 17:22:32.158047 | orchestrator | Wednesday 28 May 2025 17:21:05 +0000 (0:00:02.960) 0:09:42.684 ********* 2025-05-28 17:22:32.158051 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.158056 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.158060 | 
orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.158065 | orchestrator | 2025-05-28 17:22:32.158069 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-05-28 17:22:32.158074 | orchestrator | Wednesday 28 May 2025 17:21:06 +0000 (0:00:00.433) 0:09:43.118 ********* 2025-05-28 17:22:32.158079 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:22:32.158083 | orchestrator | 2025-05-28 17:22:32.158088 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-05-28 17:22:32.158092 | orchestrator | Wednesday 28 May 2025 17:21:07 +0000 (0:00:00.994) 0:09:44.112 ********* 2025-05-28 17:22:32.158097 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:22:32.158101 | orchestrator | 2025-05-28 17:22:32.158106 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-05-28 17:22:32.158110 | orchestrator | Wednesday 28 May 2025 17:21:07 +0000 (0:00:00.599) 0:09:44.712 ********* 2025-05-28 17:22:32.158115 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:22:32.158119 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:22:32.158124 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:22:32.158128 | orchestrator | 2025-05-28 17:22:32.158132 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-05-28 17:22:32.158137 | orchestrator | Wednesday 28 May 2025 17:21:08 +0000 (0:00:01.264) 0:09:45.976 ********* 2025-05-28 17:22:32.158145 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:22:32.158149 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:22:32.158154 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:22:32.158158 | orchestrator | 2025-05-28 17:22:32.158163 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-05-28 17:22:32.158167 | orchestrator | Wednesday 28 May 2025 17:21:10 +0000 (0:00:01.361) 0:09:47.338 ********* 2025-05-28 17:22:32.158172 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:22:32.158176 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:22:32.158181 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:22:32.158185 | orchestrator | 2025-05-28 17:22:32.158190 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2025-05-28 17:22:32.158194 | orchestrator | Wednesday 28 May 2025 17:21:12 +0000 (0:00:01.850) 0:09:49.188 ********* 2025-05-28 17:22:32.158199 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:22:32.158203 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:22:32.158208 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:22:32.158212 | orchestrator | 2025-05-28 17:22:32.158217 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-05-28 17:22:32.158221 | orchestrator | Wednesday 28 May 2025 17:21:13 +0000 (0:00:01.909) 0:09:51.098 ********* 2025-05-28 17:22:32.158226 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.158230 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.158235 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.158239 | orchestrator | 2025-05-28 17:22:32.158244 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 
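
As with the OSDs, each MDS runs as a container wrapped in a templated systemd unit grouped under ceph-mds.target, and the "Wait for mds socket" task polls until the daemon's admin socket exists. Roughly, on testbed-node-3 (unit and socket names are assumptions derived from the hostname):

    systemctl enable --now ceph-mds.target ceph-mds@testbed-node-3
    # Polled with retries until the admin socket appears:
    test -S /var/run/ceph/ceph-mds.testbed-node-3.asok
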
2025-05-28 17:22:32.158248 | orchestrator | Wednesday 28 May 2025 17:21:15 +0000 (0:00:01.463) 0:09:52.561 ********* 2025-05-28 17:22:32.158253 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:22:32.158257 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:22:32.158262 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:22:32.158266 | orchestrator | 2025-05-28 17:22:32.158271 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-05-28 17:22:32.158290 | orchestrator | Wednesday 28 May 2025 17:21:16 +0000 (0:00:00.671) 0:09:53.233 ********* 2025-05-28 17:22:32.158295 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:22:32.158300 | orchestrator | 2025-05-28 17:22:32.158304 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-05-28 17:22:32.158309 | orchestrator | Wednesday 28 May 2025 17:21:16 +0000 (0:00:00.726) 0:09:53.959 ********* 2025-05-28 17:22:32.158313 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.158318 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.158322 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.158326 | orchestrator | 2025-05-28 17:22:32.158331 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-05-28 17:22:32.158335 | orchestrator | Wednesday 28 May 2025 17:21:17 +0000 (0:00:00.312) 0:09:54.271 ********* 2025-05-28 17:22:32.158340 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:22:32.158344 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:22:32.158349 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:22:32.158353 | orchestrator | 2025-05-28 17:22:32.158358 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-05-28 17:22:32.158362 | orchestrator | Wednesday 28 May 2025 17:21:18 +0000 (0:00:01.179) 0:09:55.451 ********* 2025-05-28 17:22:32.158367 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-28 17:22:32.158371 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-28 17:22:32.158375 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-28 17:22:32.158380 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.158384 | orchestrator | 2025-05-28 17:22:32.158389 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-05-28 17:22:32.158396 | orchestrator | Wednesday 28 May 2025 17:21:19 +0000 (0:00:00.820) 0:09:56.271 ********* 2025-05-28 17:22:32.158401 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.158405 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.158410 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.158414 | orchestrator | 2025-05-28 17:22:32.158419 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-05-28 17:22:32.158423 | orchestrator | 2025-05-28 17:22:32.158428 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-05-28 17:22:32.158432 | orchestrator | Wednesday 28 May 2025 17:21:19 +0000 (0:00:00.754) 0:09:57.026 ********* 2025-05-28 17:22:32.158437 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:22:32.158441 | orchestrator | 2025-05-28 
17:22:32.158446 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-05-28 17:22:32.158450 | orchestrator | Wednesday 28 May 2025 17:21:20 +0000 (0:00:00.493) 0:09:57.519 ********* 2025-05-28 17:22:32.158455 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:22:32.158459 | orchestrator | 2025-05-28 17:22:32.158464 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-05-28 17:22:32.158469 | orchestrator | Wednesday 28 May 2025 17:21:21 +0000 (0:00:00.719) 0:09:58.238 ********* 2025-05-28 17:22:32.158473 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.158477 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.158482 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.158486 | orchestrator | 2025-05-28 17:22:32.158491 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-05-28 17:22:32.158495 | orchestrator | Wednesday 28 May 2025 17:21:21 +0000 (0:00:00.296) 0:09:58.535 ********* 2025-05-28 17:22:32.158500 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.158504 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.158509 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.158518 | orchestrator | 2025-05-28 17:22:32.158522 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-05-28 17:22:32.158526 | orchestrator | Wednesday 28 May 2025 17:21:22 +0000 (0:00:00.709) 0:09:59.244 ********* 2025-05-28 17:22:32.158531 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.158535 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.158543 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.158548 | orchestrator | 2025-05-28 17:22:32.158552 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-05-28 17:22:32.158557 | orchestrator | Wednesday 28 May 2025 17:21:22 +0000 (0:00:00.729) 0:09:59.973 ********* 2025-05-28 17:22:32.158561 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.158566 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.158570 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.158574 | orchestrator | 2025-05-28 17:22:32.158579 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-05-28 17:22:32.158583 | orchestrator | Wednesday 28 May 2025 17:21:23 +0000 (0:00:01.005) 0:10:00.979 ********* 2025-05-28 17:22:32.158588 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.158592 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.158597 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.158601 | orchestrator | 2025-05-28 17:22:32.158606 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-05-28 17:22:32.158610 | orchestrator | Wednesday 28 May 2025 17:21:24 +0000 (0:00:00.313) 0:10:01.293 ********* 2025-05-28 17:22:32.158615 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.158619 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.158624 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.158628 | orchestrator | 2025-05-28 17:22:32.158633 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-05-28 17:22:32.158637 | orchestrator | 
Wednesday 28 May 2025 17:21:24 +0000 (0:00:00.292) 0:10:01.586 ********* 2025-05-28 17:22:32.158641 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.158646 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.158650 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.158655 | orchestrator | 2025-05-28 17:22:32.158659 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-05-28 17:22:32.158664 | orchestrator | Wednesday 28 May 2025 17:21:24 +0000 (0:00:00.293) 0:10:01.879 ********* 2025-05-28 17:22:32.158668 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.158673 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.158677 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.158682 | orchestrator | 2025-05-28 17:22:32.158686 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-05-28 17:22:32.158690 | orchestrator | Wednesday 28 May 2025 17:21:25 +0000 (0:00:00.949) 0:10:02.829 ********* 2025-05-28 17:22:32.158695 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.158699 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.158704 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.158708 | orchestrator | 2025-05-28 17:22:32.158713 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-05-28 17:22:32.158717 | orchestrator | Wednesday 28 May 2025 17:21:26 +0000 (0:00:00.706) 0:10:03.536 ********* 2025-05-28 17:22:32.158722 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.158726 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.158731 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.158735 | orchestrator | 2025-05-28 17:22:32.158740 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-05-28 17:22:32.158744 | orchestrator | Wednesday 28 May 2025 17:21:26 +0000 (0:00:00.279) 0:10:03.816 ********* 2025-05-28 17:22:32.158748 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.158753 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.158757 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.158762 | orchestrator | 2025-05-28 17:22:32.158766 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-05-28 17:22:32.158774 | orchestrator | Wednesday 28 May 2025 17:21:26 +0000 (0:00:00.290) 0:10:04.106 ********* 2025-05-28 17:22:32.158779 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.158783 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.158788 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.158792 | orchestrator | 2025-05-28 17:22:32.158799 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-05-28 17:22:32.158803 | orchestrator | Wednesday 28 May 2025 17:21:27 +0000 (0:00:00.556) 0:10:04.663 ********* 2025-05-28 17:22:32.158808 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.158812 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.158817 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.158821 | orchestrator | 2025-05-28 17:22:32.158826 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-05-28 17:22:32.158830 | orchestrator | Wednesday 28 May 2025 17:21:27 +0000 (0:00:00.319) 0:10:04.982 ********* 2025-05-28 17:22:32.158835 | orchestrator | ok: 
[testbed-node-3] 2025-05-28 17:22:32.158839 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.158844 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.158848 | orchestrator | 2025-05-28 17:22:32.158853 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-05-28 17:22:32.158857 | orchestrator | Wednesday 28 May 2025 17:21:28 +0000 (0:00:00.325) 0:10:05.307 ********* 2025-05-28 17:22:32.158861 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.158866 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.158870 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.158875 | orchestrator | 2025-05-28 17:22:32.158879 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-05-28 17:22:32.158884 | orchestrator | Wednesday 28 May 2025 17:21:28 +0000 (0:00:00.282) 0:10:05.590 ********* 2025-05-28 17:22:32.158888 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.158893 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.158897 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.158902 | orchestrator | 2025-05-28 17:22:32.158906 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-05-28 17:22:32.158910 | orchestrator | Wednesday 28 May 2025 17:21:29 +0000 (0:00:00.541) 0:10:06.131 ********* 2025-05-28 17:22:32.158915 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.158919 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.158924 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.158928 | orchestrator | 2025-05-28 17:22:32.158933 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-05-28 17:22:32.158937 | orchestrator | Wednesday 28 May 2025 17:21:29 +0000 (0:00:00.313) 0:10:06.444 ********* 2025-05-28 17:22:32.158942 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.158946 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.158951 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.158955 | orchestrator | 2025-05-28 17:22:32.158962 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-05-28 17:22:32.158967 | orchestrator | Wednesday 28 May 2025 17:21:29 +0000 (0:00:00.328) 0:10:06.772 ********* 2025-05-28 17:22:32.158972 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.158976 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.158980 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.158985 | orchestrator | 2025-05-28 17:22:32.158989 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-05-28 17:22:32.158994 | orchestrator | Wednesday 28 May 2025 17:21:30 +0000 (0:00:00.588) 0:10:07.361 ********* 2025-05-28 17:22:32.158998 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:22:32.159003 | orchestrator | 2025-05-28 17:22:32.159007 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-05-28 17:22:32.159012 | orchestrator | Wednesday 28 May 2025 17:21:30 +0000 (0:00:00.495) 0:10:07.857 ********* 2025-05-28 17:22:32.159016 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 17:22:32.159024 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-28 17:22:32.159029 | 
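
[Editor's note] The check_running_containers.yml probes and the Set_fact handler_*_status pairs above feed those handlers: each check asks the container runtime whether a daemon of that type runs locally, and the resulting fact gates the restart. A hedged sketch for the osd case — the role's exact runtime invocation may differ; docker is assumed here:

- name: Check for an osd container
  ansible.builtin.command: docker ps -q --filter name=ceph-osd
  register: ceph_osd_container_stat
  changed_when: false
  failed_when: false

- name: Set_fact handler_osd_status
  ansible.builtin.set_fact:
    # true when at least one matching container id came back
    handler_osd_status: "{{ (ceph_osd_container_stat.stdout_lines | default([])) | length > 0 }}"
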
orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-28 17:22:32.159033 | orchestrator | 2025-05-28 17:22:32.159038 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-05-28 17:22:32.159042 | orchestrator | Wednesday 28 May 2025 17:21:32 +0000 (0:00:02.113) 0:10:09.970 ********* 2025-05-28 17:22:32.159047 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-28 17:22:32.159051 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-28 17:22:32.159056 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:22:32.159060 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-28 17:22:32.159065 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-28 17:22:32.159069 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:22:32.159074 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-28 17:22:32.159078 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-28 17:22:32.159082 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:22:32.159087 | orchestrator | 2025-05-28 17:22:32.159091 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-05-28 17:22:32.159096 | orchestrator | Wednesday 28 May 2025 17:21:34 +0000 (0:00:01.312) 0:10:11.282 ********* 2025-05-28 17:22:32.159100 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.159105 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.159109 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.159114 | orchestrator | 2025-05-28 17:22:32.159118 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-05-28 17:22:32.159123 | orchestrator | Wednesday 28 May 2025 17:21:34 +0000 (0:00:00.291) 0:10:11.574 ********* 2025-05-28 17:22:32.159127 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:22:32.159132 | orchestrator | 2025-05-28 17:22:32.159136 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-05-28 17:22:32.159141 | orchestrator | Wednesday 28 May 2025 17:21:34 +0000 (0:00:00.530) 0:10:12.104 ********* 2025-05-28 17:22:32.159145 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-28 17:22:32.159152 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-28 17:22:32.159157 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-28 17:22:32.159162 | orchestrator | 2025-05-28 17:22:32.159166 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-05-28 17:22:32.159171 | orchestrator | Wednesday 28 May 2025 17:21:36 +0000 (0:00:01.317) 0:10:13.421 ********* 2025-05-28 17:22:32.159175 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 17:22:32.159180 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-05-28 17:22:32.159185 | orchestrator | changed: 
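
[Editor's note] The raw Jinja in the delegate labels above ({{ groups[mon_group_name][0] … }}) is just the untemplated form of "delegate to the first monitor, or localhost if there is none". A sketch of that delegation for keyring creation, using a plain ceph CLI call in place of ceph-ansible's internal ceph_key module — the caps shown are the conventional rgw caps, not read from this run:

- name: Create rgw keyrings
  ansible.builtin.command: >
    ceph auth get-or-create client.rgw.{{ inventory_hostname }}.rgw0
    mon 'allow rw' osd 'allow rwx'
    -o /etc/ceph/ceph.client.rgw.{{ inventory_hostname }}.rgw0.keyring
  delegate_to: "{{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}"
  changed_when: true
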
[testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 17:22:32.159189 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-05-28 17:22:32.159194 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 17:22:32.159198 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-05-28 17:22:32.159203 | orchestrator | 2025-05-28 17:22:32.159207 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-05-28 17:22:32.159217 | orchestrator | Wednesday 28 May 2025 17:21:40 +0000 (0:00:04.630) 0:10:18.051 ********* 2025-05-28 17:22:32.159221 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 17:22:32.159226 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-28 17:22:32.159230 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 17:22:32.159235 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 17:22:32.159239 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-28 17:22:32.159247 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-28 17:22:32.159251 | orchestrator | 2025-05-28 17:22:32.159256 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-05-28 17:22:32.159260 | orchestrator | Wednesday 28 May 2025 17:21:43 +0000 (0:00:02.182) 0:10:20.234 ********* 2025-05-28 17:22:32.159265 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-28 17:22:32.159269 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:22:32.159274 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-28 17:22:32.159308 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:22:32.159313 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-28 17:22:32.159318 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:22:32.159322 | orchestrator | 2025-05-28 17:22:32.159327 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-05-28 17:22:32.159331 | orchestrator | Wednesday 28 May 2025 17:21:44 +0000 (0:00:01.201) 0:10:21.435 ********* 2025-05-28 17:22:32.159336 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-05-28 17:22:32.159340 | orchestrator | 2025-05-28 17:22:32.159345 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-05-28 17:22:32.159349 | orchestrator | Wednesday 28 May 2025 17:21:44 +0000 (0:00:00.242) 0:10:21.678 ********* 2025-05-28 17:22:32.159354 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-28 17:22:32.159358 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-28 17:22:32.159363 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-28 17:22:32.159368 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 
8, 'size': 3, 'type': 'replicated'}})  2025-05-28 17:22:32.159372 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-28 17:22:32.159376 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.159381 | orchestrator | 2025-05-28 17:22:32.159385 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-05-28 17:22:32.159390 | orchestrator | Wednesday 28 May 2025 17:21:45 +0000 (0:00:00.801) 0:10:22.479 ********* 2025-05-28 17:22:32.159394 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-28 17:22:32.159399 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-28 17:22:32.159403 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-28 17:22:32.159408 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-28 17:22:32.159412 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-28 17:22:32.159417 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.159434 | orchestrator | 2025-05-28 17:22:32.159442 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-05-28 17:22:32.159447 | orchestrator | Wednesday 28 May 2025 17:21:46 +0000 (0:00:01.087) 0:10:23.566 ********* 2025-05-28 17:22:32.159451 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-28 17:22:32.159456 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-28 17:22:32.159461 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-28 17:22:32.159466 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-28 17:22:32.159470 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-28 17:22:32.159475 | orchestrator | 2025-05-28 17:22:32.159479 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-05-28 17:22:32.159484 | orchestrator | Wednesday 28 May 2025 17:22:18 +0000 (0:00:31.844) 0:10:55.411 ********* 2025-05-28 17:22:32.159488 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.159493 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.159497 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.159501 | orchestrator | 2025-05-28 17:22:32.159506 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-05-28 17:22:32.159510 | orchestrator | Wednesday 28 May 2025 17:22:18 +0000 (0:00:00.313) 
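
[Editor's note] 'Create rgw pools' is the most expensive step of this play (31.84s in the recap below) because each pool create waits for the cluster to settle. Per the spec echoed above it reduces to a create plus a size set per pool; a sketch via the ceph CLI — the role's real implementation additionally handles EC profiles, crush rules, and pool application tags:

- name: Create rgw pools
  ansible.builtin.command: >
    ceph osd pool create {{ item.key }} {{ item.value.pg_num }} {{ item.value.pg_num }} replicated
  loop: "{{ rgw_create_pools | dict2items }}"
  delegate_to: "{{ groups[mon_group_name][0] }}"
  run_once: true

- name: Set rgw pool size
  ansible.builtin.command: ceph osd pool set {{ item.key }} size {{ item.value.size }}
  loop: "{{ rgw_create_pools | dict2items }}"
  delegate_to: "{{ groups[mon_group_name][0] }}"
  run_once: true
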
0:10:55.724 ********* 2025-05-28 17:22:32.159515 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.159519 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.159524 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.159528 | orchestrator | 2025-05-28 17:22:32.159536 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-05-28 17:22:32.159541 | orchestrator | Wednesday 28 May 2025 17:22:18 +0000 (0:00:00.316) 0:10:56.040 ********* 2025-05-28 17:22:32.159545 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:22:32.159549 | orchestrator | 2025-05-28 17:22:32.159553 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-05-28 17:22:32.159557 | orchestrator | Wednesday 28 May 2025 17:22:19 +0000 (0:00:00.748) 0:10:56.789 ********* 2025-05-28 17:22:32.159561 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:22:32.159565 | orchestrator | 2025-05-28 17:22:32.159569 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-05-28 17:22:32.159573 | orchestrator | Wednesday 28 May 2025 17:22:20 +0000 (0:00:00.534) 0:10:57.324 ********* 2025-05-28 17:22:32.159577 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:22:32.159581 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:22:32.159585 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:22:32.159589 | orchestrator | 2025-05-28 17:22:32.159593 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-05-28 17:22:32.159597 | orchestrator | Wednesday 28 May 2025 17:22:21 +0000 (0:00:01.426) 0:10:58.750 ********* 2025-05-28 17:22:32.159601 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:22:32.159606 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:22:32.159610 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:22:32.159614 | orchestrator | 2025-05-28 17:22:32.159618 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-05-28 17:22:32.159622 | orchestrator | Wednesday 28 May 2025 17:22:23 +0000 (0:00:01.472) 0:11:00.223 ********* 2025-05-28 17:22:32.159629 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:22:32.159633 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:22:32.159637 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:22:32.159641 | orchestrator | 2025-05-28 17:22:32.159645 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-05-28 17:22:32.159649 | orchestrator | Wednesday 28 May 2025 17:22:24 +0000 (0:00:01.836) 0:11:02.059 ********* 2025-05-28 17:22:32.159654 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-28 17:22:32.159658 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-28 17:22:32.159662 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-28 17:22:32.159666 | orchestrator | 2025-05-28 17:22:32.159670 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] 
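
[Editor's note] The systemd tasks above translate to: render a per-instance unit template, render and enable ceph-radosgw.target, then start one ceph-radosgw@… service per entry in rgw_instances (the instance dicts with instance_name/radosgw_address/radosgw_frontend_port echoed in the log). A hedged sketch; treat the template and unit names as assumptions in ceph-ansible's usual style:

- name: Generate systemd unit file
  ansible.builtin.template:
    src: ceph-radosgw.service.j2
    dest: /etc/systemd/system/ceph-radosgw@.service
    owner: root
    group: root
    mode: "0644"

- name: Enable ceph-radosgw.target
  ansible.builtin.systemd:
    name: ceph-radosgw.target
    enabled: true
    daemon_reload: true

- name: Systemd start rgw container
  ansible.builtin.systemd:
    name: "ceph-radosgw@rgw.{{ ansible_facts['hostname'] }}.{{ item.instance_name }}"
    state: started
    enabled: true
  loop: "{{ rgw_instances }}"
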
********************** 2025-05-28 17:22:32.159674 | orchestrator | Wednesday 28 May 2025 17:22:27 +0000 (0:00:02.552) 0:11:04.612 ********* 2025-05-28 17:22:32.159678 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.159682 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.159686 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.159690 | orchestrator | 2025-05-28 17:22:32.159694 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-05-28 17:22:32.159698 | orchestrator | Wednesday 28 May 2025 17:22:27 +0000 (0:00:00.326) 0:11:04.939 ********* 2025-05-28 17:22:32.159702 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:22:32.159706 | orchestrator | 2025-05-28 17:22:32.159712 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-05-28 17:22:32.159717 | orchestrator | Wednesday 28 May 2025 17:22:28 +0000 (0:00:00.484) 0:11:05.423 ********* 2025-05-28 17:22:32.159721 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.159725 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.159729 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.159733 | orchestrator | 2025-05-28 17:22:32.159737 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-05-28 17:22:32.159741 | orchestrator | Wednesday 28 May 2025 17:22:28 +0000 (0:00:00.579) 0:11:06.003 ********* 2025-05-28 17:22:32.159745 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.159749 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:22:32.159753 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:22:32.159757 | orchestrator | 2025-05-28 17:22:32.159761 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-05-28 17:22:32.159765 | orchestrator | Wednesday 28 May 2025 17:22:29 +0000 (0:00:00.343) 0:11:06.346 ********* 2025-05-28 17:22:32.159769 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-28 17:22:32.159773 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-28 17:22:32.159778 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-28 17:22:32.159782 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:22:32.159786 | orchestrator | 2025-05-28 17:22:32.159790 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-05-28 17:22:32.159794 | orchestrator | Wednesday 28 May 2025 17:22:29 +0000 (0:00:00.585) 0:11:06.932 ********* 2025-05-28 17:22:32.159798 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:22:32.159802 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:22:32.159806 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:22:32.159810 | orchestrator | 2025-05-28 17:22:32.159814 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:22:32.159819 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2025-05-28 17:22:32.159831 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-05-28 17:22:32.159840 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-05-28 17:22:32.159844 | orchestrator | testbed-node-3 : ok=186  
changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0 2025-05-28 17:22:32.159848 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-05-28 17:22:32.159852 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-05-28 17:22:32.159857 | orchestrator | 2025-05-28 17:22:32.159861 | orchestrator | 2025-05-28 17:22:32.159865 | orchestrator | 2025-05-28 17:22:32.159869 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:22:32.159873 | orchestrator | Wednesday 28 May 2025 17:22:30 +0000 (0:00:00.223) 0:11:07.156 ********* 2025-05-28 17:22:32.159877 | orchestrator | =============================================================================== 2025-05-28 17:22:32.159881 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 69.55s 2025-05-28 17:22:32.159885 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 39.46s 2025-05-28 17:22:32.159889 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.84s 2025-05-28 17:22:32.159893 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.18s 2025-05-28 17:22:32.159897 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.76s 2025-05-28 17:22:32.159901 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.48s 2025-05-28 17:22:32.159905 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.85s 2025-05-28 17:22:32.159909 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.44s 2025-05-28 17:22:32.159913 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.12s 2025-05-28 17:22:32.159917 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.73s 2025-05-28 17:22:32.159921 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.47s 2025-05-28 17:22:32.159925 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.14s 2025-05-28 17:22:32.159929 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.85s 2025-05-28 17:22:32.159934 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.63s 2025-05-28 17:22:32.159938 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.96s 2025-05-28 17:22:32.159942 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 3.95s 2025-05-28 17:22:32.159946 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.90s 2025-05-28 17:22:32.159950 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.52s 2025-05-28 17:22:32.159954 | orchestrator | ceph-osd : Unset noup flag ---------------------------------------------- 3.44s 2025-05-28 17:22:32.159958 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.43s 2025-05-28 17:22:35.195756 | orchestrator | 2025-05-28 17:22:35 | INFO  | Task 75f8a76b-6ea2-42d1-99f7-97e14c9e1a7d is in state STARTED 2025-05-28 17:22:35.197533 | orchestrator | 2025-05-28 17:22:35 | INFO  | Task 
4fcc0b0c-bcde-4847-b04f-c856fbe593ed is in state STARTED 2025-05-28 17:22:35.206226 | orchestrator | 2025-05-28 17:22:35 | INFO  | Task 37cafdb3-9b68-47a1-a54a-4713396a7016 is in state STARTED 2025-05-28 17:22:35.206918 | orchestrator | 2025-05-28 17:22:35 | INFO  | Wait 1 second(s) until the next check [identical three-task STARTED polling repeated every ~3 seconds from 17:22:38 through 17:23:14, trimmed] 2025-05-28 17:23:17.983716 | orchestrator
| 2025-05-28 17:23:17 | INFO  | Task 75f8a76b-6ea2-42d1-99f7-97e14c9e1a7d is in state STARTED 2025-05-28 17:23:17.986440 | orchestrator | 2025-05-28 17:23:17 | INFO  | Task 4fcc0b0c-bcde-4847-b04f-c856fbe593ed is in state STARTED 2025-05-28 17:23:17.989951 | orchestrator | 2025-05-28 17:23:17 | INFO  | Task 37cafdb3-9b68-47a1-a54a-4713396a7016 is in state STARTED 2025-05-28 17:23:17.990664 | orchestrator | 2025-05-28 17:23:17 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:23:21.051607 | orchestrator | 2025-05-28 17:23:21 | INFO  | Task 75f8a76b-6ea2-42d1-99f7-97e14c9e1a7d is in state STARTED 2025-05-28 17:23:21.054978 | orchestrator | 2025-05-28 17:23:21 | INFO  | Task 4fcc0b0c-bcde-4847-b04f-c856fbe593ed is in state STARTED 2025-05-28 17:23:21.057781 | orchestrator | 2025-05-28 17:23:21 | INFO  | Task 37cafdb3-9b68-47a1-a54a-4713396a7016 is in state STARTED 2025-05-28 17:23:21.057833 | orchestrator | 2025-05-28 17:23:21 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:23:24.104013 | orchestrator | 2025-05-28 17:23:24 | INFO  | Task 75f8a76b-6ea2-42d1-99f7-97e14c9e1a7d is in state STARTED 2025-05-28 17:23:24.105717 | orchestrator | 2025-05-28 17:23:24 | INFO  | Task 4fcc0b0c-bcde-4847-b04f-c856fbe593ed is in state STARTED 2025-05-28 17:23:24.108580 | orchestrator | 2025-05-28 17:23:24 | INFO  | Task 37cafdb3-9b68-47a1-a54a-4713396a7016 is in state STARTED 2025-05-28 17:23:24.108627 | orchestrator | 2025-05-28 17:23:24 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:23:27.158624 | orchestrator | 2025-05-28 17:23:27 | INFO  | Task 75f8a76b-6ea2-42d1-99f7-97e14c9e1a7d is in state SUCCESS 2025-05-28 17:23:27.163742 | orchestrator | 2025-05-28 17:23:27.163808 | orchestrator | 2025-05-28 17:23:27.163821 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-28 17:23:27.163835 | orchestrator | 2025-05-28 17:23:27.163845 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-28 17:23:27.163856 | orchestrator | Wednesday 28 May 2025 17:20:23 +0000 (0:00:00.254) 0:00:00.254 ********* 2025-05-28 17:23:27.163867 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:23:27.163878 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:23:27.163888 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:23:27.163898 | orchestrator | 2025-05-28 17:23:27.163909 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-28 17:23:27.163919 | orchestrator | Wednesday 28 May 2025 17:20:23 +0000 (0:00:00.283) 0:00:00.537 ********* 2025-05-28 17:23:27.163930 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-05-28 17:23:27.163940 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-05-28 17:23:27.163950 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-05-28 17:23:27.163961 | orchestrator | 2025-05-28 17:23:27.163971 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-05-28 17:23:27.163981 | orchestrator | 2025-05-28 17:23:27.163992 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-28 17:23:27.164002 | orchestrator | Wednesday 28 May 2025 17:20:24 +0000 (0:00:00.396) 0:00:00.934 ********* 2025-05-28 17:23:27.164018 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 
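
[Editor's note] The two grouping tasks at the top of every kolla-ansible run build dynamic groups (for example enable_opensearch_True above) so later plays can target hosts by Kolla action and by enabled service. A minimal sketch of that pattern with the group_by module:

- name: Group hosts based on Kolla action
  ansible.builtin.group_by:
    key: "kolla_action_{{ kolla_action }}"

- name: Group hosts based on enabled services
  ansible.builtin.group_by:
    key: "{{ item }}"
  loop:
    - "enable_opensearch_{{ enable_opensearch | bool }}"
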
17:23:27.164035 | orchestrator | 2025-05-28 17:23:27.164046 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-05-28 17:23:27.164055 | orchestrator | Wednesday 28 May 2025 17:20:24 +0000 (0:00:00.507) 0:00:01.441 ********* 2025-05-28 17:23:27.164065 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-28 17:23:27.164074 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-28 17:23:27.164084 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-28 17:23:27.164098 | orchestrator | 2025-05-28 17:23:27.164108 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-05-28 17:23:27.164118 | orchestrator | Wednesday 28 May 2025 17:20:25 +0000 (0:00:00.637) 0:00:02.079 ********* 2025-05-28 17:23:27.164171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-28 17:23:27.164214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-28 17:23:27.164240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-28 17:23:27.164272 | 
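
[Editor's note] 'Setting sysctl values' persists vm.max_map_count=262144, the kernel minimum OpenSearch requires for its memory-mapped indices. An equivalent standalone task (ansible.posix collection assumed available):

- name: Setting sysctl values
  ansible.posix.sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    sysctl_set: true
    state: present
  loop:
    - { name: vm.max_map_count, value: "262144" }
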
orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-28 17:23:27.164291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-28 17:23:27.164325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-28 17:23:27.164338 | orchestrator | 2025-05-28 17:23:27.164349 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-28 17:23:27.164360 | orchestrator | Wednesday 28 May 2025 17:20:26 +0000 (0:00:01.725) 0:00:03.804 ********* 2025-05-28 17:23:27.164370 
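
[Editor's note] The large items echoed by 'Ensuring config directories exist' are entries of the role's opensearch_services dict (container name, image, volumes, healthcheck, haproxy frontends); the task itself is only a file loop over the enabled entries. A sketch — path and mode are assumptions in kolla-ansible's usual layout:

- name: Ensuring config directories exist
  ansible.builtin.file:
    path: "/etc/kolla/{{ item.key }}"
    state: directory
    owner: root
    group: root
    mode: "0770"
  with_dict: "{{ opensearch_services }}"
  when: item.value.enabled | bool
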
| orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:23:27.164381 | orchestrator | 2025-05-28 17:23:27.164392 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-05-28 17:23:27.164403 | orchestrator | Wednesday 28 May 2025 17:20:27 +0000 (0:00:00.528) 0:00:04.333 ********* 2025-05-28 17:23:27.164423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-28 17:23:27.164436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-28 17:23:27.164452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-28 17:23:27.164472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-28 17:23:27.164490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-28 17:23:27.164503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-28 17:23:27.164514 | orchestrator | 2025-05-28 17:23:27.164525 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-05-28 17:23:27.164536 | orchestrator | Wednesday 28 May 2025 17:20:30 +0000 (0:00:02.522) 0:00:06.855 ********* 2025-05-28 17:23:27.164553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-28 17:23:27.164659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-28 17:23:27.164674 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:23:27.164686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-28 17:23:27.164705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-28 17:23:27.164716 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:23:27.164726 | 
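
[Editor's note] Both backend TLS tasks are skipped on every node, which indicates backend TLS (kolla_enable_tls_backend) is disabled in this testbed; when enabled, each service would get its per-host certificate and key copied next to its config. A hedged sketch of the guarded copy — source layout and file names are assumptions:

- name: opensearch | Copying over backend internal TLS certificate
  ansible.builtin.copy:
    src: "{{ kolla_certificates_dir }}/{{ inventory_hostname }}-cert.pem"
    dest: "/etc/kolla/{{ item.key }}/{{ item.key }}-cert.pem"
    mode: "0600"
  with_dict: "{{ opensearch_services }}"
  when:
    - kolla_enable_tls_backend | bool
    - item.value.enabled | bool
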
orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-28 17:23:27.164749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-28 17:23:27.164761 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:23:27.164770 | orchestrator | 2025-05-28 17:23:27.164780 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-05-28 17:23:27.164790 | orchestrator | Wednesday 28 May 2025 17:20:31 +0000 (0:00:01.452) 0:00:08.308 ********* 2025-05-28 17:23:27.164800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-28 17:23:27.164818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-28 17:23:27.164828 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:23:27.164838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-28 17:23:27.164874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-28 17:23:27.164886 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:23:27.164896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': 
{'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-28 17:23:27.164914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-28 17:23:27.164924 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:23:27.164934 | orchestrator | 2025-05-28 17:23:27.164943 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-05-28 17:23:27.164962 | orchestrator | Wednesday 28 May 2025 17:20:32 +0000 (0:00:00.784) 0:00:09.092 ********* 2025-05-28 17:23:27.164972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-28 17:23:27.164987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-28 17:23:27.164998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 
'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-28 17:23:27.165014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-28 17:23:27.165025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-28 17:23:27.165053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-28 17:23:27.165064 | orchestrator | 2025-05-28 17:23:27.165073 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-05-28 17:23:27.165083 | orchestrator | Wednesday 28 May 2025 17:20:34 +0000 (0:00:02.323) 0:00:11.416 ********* 2025-05-28 17:23:27.165092 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:23:27.165102 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:23:27.165111 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:23:27.165120 | orchestrator | 2025-05-28 17:23:27.165130 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-05-28 17:23:27.165140 | orchestrator | Wednesday 28 May 2025 17:20:38 +0000 (0:00:04.001) 0:00:15.417 ********* 2025-05-28 17:23:27.165149 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:23:27.165159 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:23:27.165168 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:23:27.165177 | orchestrator | 2025-05-28 17:23:27.165186 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-05-28 17:23:27.165196 | orchestrator | Wednesday 28 May 2025 17:20:40 +0000 (0:00:01.678) 0:00:17.095 ********* 2025-05-28 17:23:27.165206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-28 17:23:27.165222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-28 17:23:27.165239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-28 17:23:27.165295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-28 17:23:27.165307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-28 17:23:27.165325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': 
'30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-28 17:23:27.165343 | orchestrator | 2025-05-28 17:23:27.165353 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-28 17:23:27.165362 | orchestrator | Wednesday 28 May 2025 17:20:42 +0000 (0:00:02.231) 0:00:19.327 ********* 2025-05-28 17:23:27.165372 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:23:27.165381 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:23:27.165390 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:23:27.165400 | orchestrator | 2025-05-28 17:23:27.165409 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-28 17:23:27.165418 | orchestrator | Wednesday 28 May 2025 17:20:42 +0000 (0:00:00.285) 0:00:19.613 ********* 2025-05-28 17:23:27.165428 | orchestrator | 2025-05-28 17:23:27.165437 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-28 17:23:27.165446 | orchestrator | Wednesday 28 May 2025 17:20:42 +0000 (0:00:00.063) 0:00:19.677 ********* 2025-05-28 17:23:27.165456 | orchestrator | 2025-05-28 17:23:27.165465 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-28 17:23:27.165474 | orchestrator | Wednesday 28 May 2025 17:20:42 +0000 (0:00:00.063) 0:00:19.740 ********* 2025-05-28 17:23:27.165484 | orchestrator | 2025-05-28 17:23:27.165493 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-05-28 17:23:27.165502 | orchestrator | Wednesday 28 May 2025 17:20:43 +0000 (0:00:00.261) 0:00:20.002 ********* 2025-05-28 17:23:27.165511 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:23:27.165521 | orchestrator | 2025-05-28 17:23:27.165530 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-05-28 17:23:27.165540 | orchestrator | Wednesday 28 May 2025 17:20:43 +0000 (0:00:00.222) 0:00:20.224 ********* 2025-05-28 17:23:27.165549 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:23:27.165558 | orchestrator | 2025-05-28 17:23:27.165572 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-05-28 17:23:27.165582 | orchestrator | Wednesday 28 May 2025 17:20:43 +0000 (0:00:00.201) 0:00:20.426 ********* 2025-05-28 17:23:27.165592 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:23:27.165601 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:23:27.165610 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:23:27.165620 | orchestrator | 2025-05-28 17:23:27.165629 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-05-28 17:23:27.165639 | orchestrator | Wednesday 28 May 2025 17:21:56 +0000 (0:01:13.270) 0:01:33.697 ********* 2025-05-28 17:23:27.165648 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:23:27.165657 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:23:27.165666 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:23:27.165676 | orchestrator | 2025-05-28 17:23:27.165685 | orchestrator | TASK 
[opensearch : include_tasks] ********************************************** 2025-05-28 17:23:27.165694 | orchestrator | Wednesday 28 May 2025 17:23:14 +0000 (0:01:17.215) 0:02:50.913 ********* 2025-05-28 17:23:27.165704 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:23:27.165713 | orchestrator | 2025-05-28 17:23:27.165723 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-05-28 17:23:27.165732 | orchestrator | Wednesday 28 May 2025 17:23:14 +0000 (0:00:00.630) 0:02:51.543 ********* 2025-05-28 17:23:27.165741 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:23:27.165756 | orchestrator | 2025-05-28 17:23:27.165766 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-05-28 17:23:27.165775 | orchestrator | Wednesday 28 May 2025 17:23:16 +0000 (0:00:02.212) 0:02:53.756 ********* 2025-05-28 17:23:27.165785 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:23:27.165794 | orchestrator | 2025-05-28 17:23:27.165803 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-05-28 17:23:27.165812 | orchestrator | Wednesday 28 May 2025 17:23:19 +0000 (0:00:02.287) 0:02:56.044 ********* 2025-05-28 17:23:27.165822 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:23:27.165831 | orchestrator | 2025-05-28 17:23:27.165841 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-05-28 17:23:27.165850 | orchestrator | Wednesday 28 May 2025 17:23:21 +0000 (0:00:02.611) 0:02:58.655 ********* 2025-05-28 17:23:27.165859 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:23:27.165869 | orchestrator | 2025-05-28 17:23:27.165878 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:23:27.165889 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-28 17:23:27.165901 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-28 17:23:27.165910 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-28 17:23:27.165920 | orchestrator | 2025-05-28 17:23:27.165929 | orchestrator | 2025-05-28 17:23:27.165939 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:23:27.165952 | orchestrator | Wednesday 28 May 2025 17:23:24 +0000 (0:00:02.543) 0:03:01.199 ********* 2025-05-28 17:23:27.165962 | orchestrator | =============================================================================== 2025-05-28 17:23:27.165972 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 77.22s 2025-05-28 17:23:27.165981 | orchestrator | opensearch : Restart opensearch container ------------------------------ 73.27s 2025-05-28 17:23:27.165990 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 4.00s 2025-05-28 17:23:27.166000 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.61s 2025-05-28 17:23:27.166009 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.54s 2025-05-28 17:23:27.166068 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.52s 2025-05-28 17:23:27.166078 
| orchestrator | opensearch : Copying over config.json files for services ---------------- 2.32s 2025-05-28 17:23:27.166088 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.29s 2025-05-28 17:23:27.166097 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.23s 2025-05-28 17:23:27.166107 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.21s 2025-05-28 17:23:27.166116 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.72s 2025-05-28 17:23:27.166126 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.68s 2025-05-28 17:23:27.166136 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.45s 2025-05-28 17:23:27.166145 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.78s 2025-05-28 17:23:27.166155 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.64s 2025-05-28 17:23:27.166164 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.63s 2025-05-28 17:23:27.166173 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s 2025-05-28 17:23:27.166183 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s 2025-05-28 17:23:27.166192 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.40s 2025-05-28 17:23:27.166213 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.39s 2025-05-28 17:23:27.166223 | orchestrator | 2025-05-28 17:23:27 | INFO  | Task 4fcc0b0c-bcde-4847-b04f-c856fbe593ed is in state STARTED 2025-05-28 17:23:27.166237 | orchestrator | 2025-05-28 17:23:27 | INFO  | Task 37cafdb3-9b68-47a1-a54a-4713396a7016 is in state STARTED 2025-05-28 17:23:27.166288 | orchestrator | 2025-05-28 17:23:27 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:23:30.206735 | orchestrator | 2025-05-28 17:23:30 | INFO  | Task 4fcc0b0c-bcde-4847-b04f-c856fbe593ed is in state STARTED 2025-05-28 17:23:30.206965 | orchestrator | 2025-05-28 17:23:30 | INFO  | Task 37cafdb3-9b68-47a1-a54a-4713396a7016 is in state STARTED 2025-05-28 17:23:30.206981 | orchestrator | 2025-05-28 17:23:30 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:23:33.265019 | orchestrator | 2025-05-28 17:23:33 | INFO  | Task 9d5dbdf6-3fee-4f13-9abc-0b267642dc71 is in state STARTED 2025-05-28 17:23:33.267657 | orchestrator | 2025-05-28 17:23:33 | INFO  | Task 4fcc0b0c-bcde-4847-b04f-c856fbe593ed is in state SUCCESS 2025-05-28 17:23:33.267873 | orchestrator | 2025-05-28 17:23:33.269804 | orchestrator | 2025-05-28 17:23:33.269841 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-05-28 17:23:33.269853 | orchestrator | 2025-05-28 17:23:33.269865 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-05-28 17:23:33.269877 | orchestrator | Wednesday 28 May 2025 17:20:23 +0000 (0:00:00.103) 0:00:00.103 ********* 2025-05-28 17:23:33.269889 | orchestrator | ok: [localhost] => { 2025-05-28 17:23:33.269901 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 
2025-05-28 17:23:33.269911 | orchestrator | } 2025-05-28 17:23:33.269921 | orchestrator | 2025-05-28 17:23:33.269931 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-05-28 17:23:33.269941 | orchestrator | Wednesday 28 May 2025 17:20:23 +0000 (0:00:00.055) 0:00:00.158 ********* 2025-05-28 17:23:33.269951 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-05-28 17:23:33.269963 | orchestrator | ...ignoring 2025-05-28 17:23:33.269973 | orchestrator | 2025-05-28 17:23:33.269983 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-05-28 17:23:33.269992 | orchestrator | Wednesday 28 May 2025 17:20:26 +0000 (0:00:02.869) 0:00:03.027 ********* 2025-05-28 17:23:33.270002 | orchestrator | skipping: [localhost] 2025-05-28 17:23:33.270326 | orchestrator | 2025-05-28 17:23:33.270340 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-05-28 17:23:33.270350 | orchestrator | Wednesday 28 May 2025 17:20:26 +0000 (0:00:00.061) 0:00:03.089 ********* 2025-05-28 17:23:33.270360 | orchestrator | ok: [localhost] 2025-05-28 17:23:33.270370 | orchestrator | 2025-05-28 17:23:33.270380 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-28 17:23:33.270389 | orchestrator | 2025-05-28 17:23:33.270399 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-28 17:23:33.270409 | orchestrator | Wednesday 28 May 2025 17:20:26 +0000 (0:00:00.147) 0:00:03.236 ********* 2025-05-28 17:23:33.270419 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:23:33.270429 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:23:33.270438 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:23:33.270448 | orchestrator | 2025-05-28 17:23:33.270458 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-28 17:23:33.270467 | orchestrator | Wednesday 28 May 2025 17:20:26 +0000 (0:00:00.285) 0:00:03.522 ********* 2025-05-28 17:23:33.270477 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-05-28 17:23:33.270487 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-05-28 17:23:33.270497 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-05-28 17:23:33.270535 | orchestrator | 2025-05-28 17:23:33.270546 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-05-28 17:23:33.270556 | orchestrator | 2025-05-28 17:23:33.270565 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-05-28 17:23:33.270575 | orchestrator | Wednesday 28 May 2025 17:20:27 +0000 (0:00:00.546) 0:00:04.069 ********* 2025-05-28 17:23:33.270585 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-28 17:23:33.270596 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-28 17:23:33.270605 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-28 17:23:33.270615 | orchestrator | 2025-05-28 17:23:33.270624 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-28 17:23:33.270634 | orchestrator | Wednesday 28 May 2025 17:20:27 +0000 (0:00:00.415) 0:00:04.485 ********* 2025-05-28 17:23:33.270644 | orchestrator | 
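
The "Timeout when waiting for search string MariaDB in 192.168.16.9:3306" failure above is the signature of an ansible.builtin.wait_for probe against the internal VIP: the MySQL greeting carries the server version string, so a match means a cluster is already running. A hedged sketch of the bootstrap-vs-upgrade switch this play implements (task names match the log; the module arguments are assumptions, not the verbatim OSISM playbook):

  - name: Check MariaDB service
    ansible.builtin.wait_for:
      host: 192.168.16.9        # internal VIP from the log
      port: 3306
      search_regex: MariaDB     # matches the server greeting if MariaDB is up
      timeout: 2                # assumed; the log reports "elapsed: 2"
    register: mariadb_check
    ignore_errors: true         # matches the "...ignoring" above

  - name: Set kolla_action_mariadb = upgrade if MariaDB is already running
    ansible.builtin.set_fact:
      kolla_action_mariadb: upgrade
    when: mariadb_check is success

  - name: Set kolla_action_mariadb = kolla_action_ng
    ansible.builtin.set_fact:
      kolla_action_mariadb: "{{ kolla_action_ng }}"   # assumed to be 'deploy' on a fresh testbed
    when: mariadb_check is failed

On this fresh deployment the probe times out, so the upgrade branch is skipped and the role falls through to a regular deploy, exactly as the task results above show.
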
included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:23:33.270654 | orchestrator | 2025-05-28 17:23:33.270664 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-05-28 17:23:33.270673 | orchestrator | Wednesday 28 May 2025 17:20:28 +0000 (0:00:00.743) 0:00:05.228 ********* 2025-05-28 17:23:33.270720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-28 17:23:33.270736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-28 17:23:33.270762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-28 17:23:33.270774 | orchestrator | 2025-05-28 17:23:33.270791 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-05-28 17:23:33.270802 | orchestrator | Wednesday 28 May 2025 17:20:31 +0000 (0:00:03.423) 0:00:08.651 ********* 2025-05-28 17:23:33.270811 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:23:33.270822 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:23:33.270831 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:23:33.270841 | orchestrator | 2025-05-28 17:23:33.270850 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-05-28 17:23:33.270860 | orchestrator | Wednesday 28 May 2025 17:20:32 +0000 (0:00:00.720) 0:00:09.372 ********* 2025-05-28 17:23:33.270869 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:23:33.270881 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:23:33.270891 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:23:33.270902 | orchestrator | 2025-05-28 17:23:33.270913 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 
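
The mariadb loop items above encode the usual single-writer pattern for a Galera cluster. Condensed to the parts that define health checking and load balancing (values copied from the log; comments are interpretation, not OSISM documentation):

  mariadb:
    container_name: mariadb
    group: mariadb_shard_0            # hosts were grouped into shards earlier in the play
    image: registry.osism.tech/kolla/mariadb-server:2024.2
    healthcheck:
      test: ["CMD-SHELL", "/usr/bin/clustercheck"]   # Galera-aware check, not a plain TCP ping
    environment:
      AVAILABLE_WHEN_DONOR: "1"       # keep a node usable while it serves as SST donor
    haproxy:
      mariadb:
        enabled: true
        mode: tcp
        port: "3306"
        frontend_tcp_extra: ["option clitcpka", "timeout client 3600s"]
        backend_tcp_extra: ["option srvtcpka", "timeout server 3600s"]
        custom_member_list:
          - " server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5"
          - " server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup"
          - " server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup"

Only testbed-node-0 receives traffic; the HAProxy "backup" keyword keeps the other two Galera members idle until the primary fails its checks (probed every 2000 ms, marked down after 5 failures, back up after 2 successes), which avoids multi-writer conflicts across the cluster.
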
2025-05-28 17:23:33.270924 | orchestrator | Wednesday 28 May 2025 17:20:34 +0000 (0:00:01.468) 0:00:10.840 ********* 2025-05-28 17:23:33.270936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-28 17:23:33.270967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-28 17:23:33.270981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-28 17:23:33.270999 | orchestrator | 2025-05-28 17:23:33.271010 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-05-28 17:23:33.271021 | orchestrator | Wednesday 28 May 2025 17:20:38 +0000 (0:00:04.527) 0:00:15.368 ********* 2025-05-28 17:23:33.271032 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:23:33.271043 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:23:33.271054 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:23:33.271065 | orchestrator | 2025-05-28 17:23:33.271074 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-05-28 17:23:33.271084 | orchestrator | Wednesday 28 May 2025 17:20:39 +0000 (0:00:01.274) 0:00:16.643 ********* 2025-05-28 17:23:33.271093 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:23:33.271103 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:23:33.271112 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:23:33.271122 | orchestrator | 2025-05-28 17:23:33.271131 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-28 17:23:33.271141 | orchestrator | Wednesday 28 May 2025 17:20:44 +0000 (0:00:04.416) 0:00:21.059 ********* 2025-05-28 17:23:33.271229 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 
17:23:33.271270 | orchestrator | 2025-05-28 17:23:33.271282 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-05-28 17:23:33.271292 | orchestrator | Wednesday 28 May 2025 17:20:45 +0000 (0:00:00.928) 0:00:21.987 ********* 2025-05-28 17:23:33.271318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-28 17:23:33.271337 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:23:33.271348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 
'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-28 17:23:33.271358 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:23:33.271423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-28 17:23:33.271444 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:23:33.271454 | orchestrator | 2025-05-28 17:23:33.271463 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-05-28 17:23:33.271473 | orchestrator | Wednesday 28 May 2025 17:20:47 +0000 (0:00:02.561) 0:00:24.549 ********* 2025-05-28 17:23:33.271483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': 
'1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-28 17:23:33.271494 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:23:33.271515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-28 17:23:33.271533 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:23:33.271543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-28 17:23:33.271554 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:23:33.271563 | orchestrator | 2025-05-28 17:23:33.271573 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-05-28 17:23:33.271582 | orchestrator | Wednesday 28 May 2025 17:20:50 +0000 (0:00:03.105) 0:00:27.654 ********* 2025-05-28 17:23:33.271598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}}}})  2025-05-28 17:23:33.271622 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:23:33.271640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-28 17:23:33.271651 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:23:33.271666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-28 17:23:33.271682 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:23:33.271692 | orchestrator | 2025-05-28 17:23:33.271702 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-05-28 17:23:33.271711 | orchestrator | Wednesday 28 May 2025 17:20:53 +0000 (0:00:02.417) 0:00:30.072 ********* 2025-05-28 17:23:33.271730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-28 17:23:33.271747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' 
server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-28 17:23:33.271765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-28 17:23:33.271784 | orchestrator | 2025-05-28 17:23:33.271794 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-05-28 17:23:33.271803 | orchestrator | Wednesday 28 May 2025 17:20:56 +0000 (0:00:03.363) 0:00:33.436 ********* 2025-05-28 17:23:33.271813 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:23:33.271822 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:23:33.271832 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:23:33.271841 | orchestrator | 2025-05-28 17:23:33.271851 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-05-28 17:23:33.271861 | orchestrator | Wednesday 28 May 2025 17:20:57 +0000 (0:00:01.010) 0:00:34.446 ********* 2025-05-28 17:23:33.271872 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:23:33.271882 | orchestrator | ok: 
[testbed-node-1] 2025-05-28 17:23:33.271893 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:23:33.271903 | orchestrator | 2025-05-28 17:23:33.271914 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-05-28 17:23:33.271925 | orchestrator | Wednesday 28 May 2025 17:20:58 +0000 (0:00:00.339) 0:00:34.785 ********* 2025-05-28 17:23:33.271935 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:23:33.271946 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:23:33.271957 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:23:33.271968 | orchestrator | 2025-05-28 17:23:33.271979 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-05-28 17:23:33.271989 | orchestrator | Wednesday 28 May 2025 17:20:58 +0000 (0:00:00.350) 0:00:35.136 ********* 2025-05-28 17:23:33.272001 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-05-28 17:23:33.272013 | orchestrator | ...ignoring 2025-05-28 17:23:33.272024 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-05-28 17:23:33.272035 | orchestrator | ...ignoring 2025-05-28 17:23:33.272046 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-05-28 17:23:33.272056 | orchestrator | ...ignoring 2025-05-28 17:23:33.272067 | orchestrator | 2025-05-28 17:23:33.272077 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-05-28 17:23:33.272109 | orchestrator | Wednesday 28 May 2025 17:21:09 +0000 (0:00:10.956) 0:00:46.093 ********* 2025-05-28 17:23:33.272120 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:23:33.272131 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:23:33.272142 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:23:33.272153 | orchestrator | 2025-05-28 17:23:33.272164 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-05-28 17:23:33.272175 | orchestrator | Wednesday 28 May 2025 17:21:09 +0000 (0:00:00.674) 0:00:46.767 ********* 2025-05-28 17:23:33.272185 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:23:33.272196 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:23:33.272207 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:23:33.272218 | orchestrator | 2025-05-28 17:23:33.272229 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-05-28 17:23:33.272238 | orchestrator | Wednesday 28 May 2025 17:21:10 +0000 (0:00:00.426) 0:00:47.193 ********* 2025-05-28 17:23:33.272283 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:23:33.272306 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:23:33.272324 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:23:33.272340 | orchestrator | 2025-05-28 17:23:33.272351 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-05-28 17:23:33.272360 | orchestrator | Wednesday 28 May 2025 17:21:10 +0000 (0:00:00.392) 0:00:47.586 ********* 2025-05-28 17:23:33.272370 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:23:33.272379 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:23:33.272388 | orchestrator | 
skipping: [testbed-node-2] 2025-05-28 17:23:33.272399 | orchestrator | 2025-05-28 17:23:33.272409 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-05-28 17:23:33.272420 | orchestrator | Wednesday 28 May 2025 17:21:11 +0000 (0:00:00.403) 0:00:47.990 ********* 2025-05-28 17:23:33.272431 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:23:33.272441 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:23:33.272452 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:23:33.272463 | orchestrator | 2025-05-28 17:23:33.272473 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-05-28 17:23:33.272484 | orchestrator | Wednesday 28 May 2025 17:21:11 +0000 (0:00:00.605) 0:00:48.595 ********* 2025-05-28 17:23:33.272501 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:23:33.272513 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:23:33.272524 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:23:33.272535 | orchestrator | 2025-05-28 17:23:33.272545 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-28 17:23:33.272556 | orchestrator | Wednesday 28 May 2025 17:21:12 +0000 (0:00:00.391) 0:00:48.987 ********* 2025-05-28 17:23:33.272567 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:23:33.272577 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:23:33.272588 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-05-28 17:23:33.272599 | orchestrator | 2025-05-28 17:23:33.272609 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-05-28 17:23:33.272620 | orchestrator | Wednesday 28 May 2025 17:21:12 +0000 (0:00:00.380) 0:00:49.368 ********* 2025-05-28 17:23:33.272631 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:23:33.272641 | orchestrator | 2025-05-28 17:23:33.272652 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-05-28 17:23:33.272663 | orchestrator | Wednesday 28 May 2025 17:21:23 +0000 (0:00:10.495) 0:00:59.863 ********* 2025-05-28 17:23:33.272673 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:23:33.272684 | orchestrator | 2025-05-28 17:23:33.272695 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-28 17:23:33.272705 | orchestrator | Wednesday 28 May 2025 17:21:23 +0000 (0:00:00.134) 0:00:59.997 ********* 2025-05-28 17:23:33.272716 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:23:33.272727 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:23:33.272745 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:23:33.272756 | orchestrator | 2025-05-28 17:23:33.272767 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-05-28 17:23:33.272778 | orchestrator | Wednesday 28 May 2025 17:21:24 +0000 (0:00:00.975) 0:01:00.973 ********* 2025-05-28 17:23:33.272789 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:23:33.272799 | orchestrator | 2025-05-28 17:23:33.272810 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-05-28 17:23:33.272821 | orchestrator | Wednesday 28 May 2025 17:21:31 +0000 (0:00:07.287) 0:01:08.260 ********* 2025-05-28 17:23:33.272832 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:23:33.272842 | orchestrator | 2025-05-28 17:23:33.272853 | 
orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-05-28 17:23:33.272864 | orchestrator | Wednesday 28 May 2025 17:21:33 +0000 (0:00:01.536) 0:01:09.797 ********* 2025-05-28 17:23:33.272874 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:23:33.272885 | orchestrator | 2025-05-28 17:23:33.272896 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-05-28 17:23:33.272907 | orchestrator | Wednesday 28 May 2025 17:21:35 +0000 (0:00:02.255) 0:01:12.052 ********* 2025-05-28 17:23:33.272917 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:23:33.272928 | orchestrator | 2025-05-28 17:23:33.272939 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-05-28 17:23:33.272950 | orchestrator | Wednesday 28 May 2025 17:21:35 +0000 (0:00:00.117) 0:01:12.169 ********* 2025-05-28 17:23:33.272961 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:23:33.272971 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:23:33.272982 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:23:33.272993 | orchestrator | 2025-05-28 17:23:33.273003 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-05-28 17:23:33.273014 | orchestrator | Wednesday 28 May 2025 17:21:35 +0000 (0:00:00.514) 0:01:12.684 ********* 2025-05-28 17:23:33.273025 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:23:33.273035 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-05-28 17:23:33.273046 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:23:33.273057 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:23:33.273067 | orchestrator | 2025-05-28 17:23:33.273078 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-05-28 17:23:33.273089 | orchestrator | skipping: no hosts matched 2025-05-28 17:23:33.273099 | orchestrator | 2025-05-28 17:23:33.273110 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-28 17:23:33.273121 | orchestrator | 2025-05-28 17:23:33.273131 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-28 17:23:33.273142 | orchestrator | Wednesday 28 May 2025 17:21:36 +0000 (0:00:00.330) 0:01:13.014 ********* 2025-05-28 17:23:33.273153 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:23:33.273164 | orchestrator | 2025-05-28 17:23:33.273174 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-28 17:23:33.273185 | orchestrator | Wednesday 28 May 2025 17:21:54 +0000 (0:00:18.678) 0:01:31.693 ********* 2025-05-28 17:23:33.273196 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:23:33.273207 | orchestrator | 2025-05-28 17:23:33.273217 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-28 17:23:33.273228 | orchestrator | Wednesday 28 May 2025 17:22:15 +0000 (0:00:20.642) 0:01:52.335 ********* 2025-05-28 17:23:33.273412 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:23:33.273434 | orchestrator | 2025-05-28 17:23:33.273446 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-28 17:23:33.273456 | orchestrator | 2025-05-28 17:23:33.273467 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2025-05-28 17:23:33.273477 | orchestrator | Wednesday 28 May 2025 17:22:17 +0000 (0:00:02.417) 0:01:54.753 ********* 2025-05-28 17:23:33.273488 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:23:33.273508 | orchestrator | 2025-05-28 17:23:33.273518 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-28 17:23:33.273529 | orchestrator | Wednesday 28 May 2025 17:22:38 +0000 (0:00:20.308) 0:02:15.062 ********* 2025-05-28 17:23:33.273539 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:23:33.273550 | orchestrator | 2025-05-28 17:23:33.273560 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-28 17:23:33.273571 | orchestrator | Wednesday 28 May 2025 17:22:58 +0000 (0:00:20.557) 0:02:35.619 ********* 2025-05-28 17:23:33.273582 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:23:33.273592 | orchestrator | 2025-05-28 17:23:33.273603 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-05-28 17:23:33.273613 | orchestrator | 2025-05-28 17:23:33.273694 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-28 17:23:33.273706 | orchestrator | Wednesday 28 May 2025 17:23:01 +0000 (0:00:02.809) 0:02:38.428 ********* 2025-05-28 17:23:33.273716 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:23:33.273727 | orchestrator | 2025-05-28 17:23:33.273738 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-28 17:23:33.273748 | orchestrator | Wednesday 28 May 2025 17:23:16 +0000 (0:00:15.033) 0:02:53.461 ********* 2025-05-28 17:23:33.273759 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:23:33.273769 | orchestrator | 2025-05-28 17:23:33.273780 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-28 17:23:33.273791 | orchestrator | Wednesday 28 May 2025 17:23:17 +0000 (0:00:00.505) 0:02:53.967 ********* 2025-05-28 17:23:33.273801 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:23:33.273812 | orchestrator | 2025-05-28 17:23:33.273822 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-05-28 17:23:33.273831 | orchestrator | 2025-05-28 17:23:33.273841 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-05-28 17:23:33.273850 | orchestrator | Wednesday 28 May 2025 17:23:19 +0000 (0:00:02.277) 0:02:56.244 ********* 2025-05-28 17:23:33.273860 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:23:33.273869 | orchestrator | 2025-05-28 17:23:33.273879 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-05-28 17:23:33.273888 | orchestrator | Wednesday 28 May 2025 17:23:19 +0000 (0:00:00.513) 0:02:56.758 ********* 2025-05-28 17:23:33.273898 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:23:33.273907 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:23:33.273916 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:23:33.273926 | orchestrator | 2025-05-28 17:23:33.273935 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-05-28 17:23:33.273945 | orchestrator | Wednesday 28 May 2025 17:23:22 +0000 (0:00:02.436) 0:02:59.195 ********* 2025-05-28 17:23:33.273954 | orchestrator | 
skipping: [testbed-node-1] 2025-05-28 17:23:33.273964 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:23:33.273973 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:23:33.273982 | orchestrator | 2025-05-28 17:23:33.273992 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-05-28 17:23:33.274001 | orchestrator | Wednesday 28 May 2025 17:23:24 +0000 (0:00:02.050) 0:03:01.246 ********* 2025-05-28 17:23:33.274011 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:23:33.274052 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:23:33.274061 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:23:33.274070 | orchestrator | 2025-05-28 17:23:33.274080 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-05-28 17:23:33.274089 | orchestrator | Wednesday 28 May 2025 17:23:26 +0000 (0:00:02.009) 0:03:03.255 ********* 2025-05-28 17:23:33.274099 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:23:33.274108 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:23:33.274117 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:23:33.274127 | orchestrator | 2025-05-28 17:23:33.274136 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-05-28 17:23:33.274153 | orchestrator | Wednesday 28 May 2025 17:23:28 +0000 (0:00:01.938) 0:03:05.194 ********* 2025-05-28 17:23:33.274163 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:23:33.274172 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:23:33.274182 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:23:33.274191 | orchestrator | 2025-05-28 17:23:33.274200 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-05-28 17:23:33.274210 | orchestrator | Wednesday 28 May 2025 17:23:31 +0000 (0:00:02.897) 0:03:08.091 ********* 2025-05-28 17:23:33.274220 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:23:33.274229 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:23:33.274238 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:23:33.274269 | orchestrator | 2025-05-28 17:23:33.274279 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:23:33.274289 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-05-28 17:23:33.274299 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-05-28 17:23:33.274311 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-05-28 17:23:33.274323 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-05-28 17:23:33.274333 | orchestrator | 2025-05-28 17:23:33.274344 | orchestrator | 2025-05-28 17:23:33.274360 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:23:33.274371 | orchestrator | Wednesday 28 May 2025 17:23:31 +0000 (0:00:00.226) 0:03:08.318 ********* 2025-05-28 17:23:33.274381 | orchestrator | =============================================================================== 2025-05-28 17:23:33.274392 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.20s 2025-05-28 17:23:33.274402 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 38.99s 
2025-05-28 17:23:33.274414 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 15.03s 2025-05-28 17:23:33.274424 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.96s 2025-05-28 17:23:33.274434 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.50s 2025-05-28 17:23:33.274445 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.29s 2025-05-28 17:23:33.274462 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.23s 2025-05-28 17:23:33.274473 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.53s 2025-05-28 17:23:33.274484 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.42s 2025-05-28 17:23:33.274494 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.42s 2025-05-28 17:23:33.274505 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.36s 2025-05-28 17:23:33.274515 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.11s 2025-05-28 17:23:33.274526 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.90s 2025-05-28 17:23:33.274537 | orchestrator | Check MariaDB service --------------------------------------------------- 2.87s 2025-05-28 17:23:33.274548 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.56s 2025-05-28 17:23:33.274559 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.44s 2025-05-28 17:23:33.274569 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.42s 2025-05-28 17:23:33.274579 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.28s 2025-05-28 17:23:33.274601 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.26s 2025-05-28 17:23:33.274613 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.05s 2025-05-28 17:23:33.274623 | orchestrator | 2025-05-28 17:23:33 | INFO  | Task 37cafdb3-9b68-47a1-a54a-4713396a7016 is in state STARTED 2025-05-28 17:23:33.274634 | orchestrator | 2025-05-28 17:23:33 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state STARTED 2025-05-28 17:23:33.274645 | orchestrator | 2025-05-28 17:23:33 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:23:36.326564 | orchestrator | 2025-05-28 17:23:36 | INFO  | Task 9d5dbdf6-3fee-4f13-9abc-0b267642dc71 is in state STARTED 2025-05-28 17:23:36.327659 | orchestrator | 2025-05-28 17:23:36 | INFO  | Task 37cafdb3-9b68-47a1-a54a-4713396a7016 is in state STARTED 2025-05-28 17:23:36.328650 | orchestrator | 2025-05-28 17:23:36 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state STARTED 2025-05-28 17:23:36.328776 | orchestrator | 2025-05-28 17:23:36 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:23:39.373820 | orchestrator | 2025-05-28 17:23:39 | INFO  | Task 9d5dbdf6-3fee-4f13-9abc-0b267642dc71 is in state STARTED 2025-05-28 17:23:39.374525 | orchestrator | 2025-05-28 17:23:39 | INFO  | Task 37cafdb3-9b68-47a1-a54a-4713396a7016 is in state STARTED 2025-05-28 17:23:39.377505 | orchestrator | 2025-05-28 17:23:39 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state STARTED 
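The MariaDB play above follows the usual Galera pattern: the initial "Check MariaDB service port liveness" timeouts are expected on a fresh deployment (nothing is listening on 3306 yet), so the play bootstraps the cluster on testbed-node-0 and then starts testbed-node-1 and testbed-node-2 one at a time, waiting after each step for port liveness and WSREP sync before moving on. The "backup" keyword on nodes 1 and 2 in the HAProxy custom_member_list serves the same single-writer goal: client traffic lands on one Galera member unless its health check fails. A minimal sketch of the two wait patterns visible in the log, using the stock ansible.builtin.wait_for module (whose timeout message matches the failures above) and a mysql CLI probe; the db_address variable and the monitor-credential wiring are illustrative, not taken from the actual playbooks:

    - name: Wait for MariaDB service port liveness (sketch)
      ansible.builtin.wait_for:
        host: "{{ db_address }}"   # illustrative; the log probes 192.168.16.1x
        port: 3306
        search_regex: "MariaDB"    # yields the "search string MariaDB" timeout message seen above
        timeout: 10

    - name: Wait for MariaDB service to sync WSREP (sketch)
      ansible.builtin.command: >
        mysql -u monitor -h {{ db_address }}
        -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'"
      register: wsrep_status
      changed_when: false
      until: "'Synced' in wsrep_status.stdout"
      retries: 10
      delay: 6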
2025-05-28 17:23:39.377544 | orchestrator | 2025-05-28 17:23:39 | INFO  | Wait 1 second(s) until the next check [... the same three-task polling cycle repeats about every 3 seconds from 17:23:42 through 17:24:40: tasks 9d5dbdf6-3fee-4f13-9abc-0b267642dc71, 37cafdb3-9b68-47a1-a54a-4713396a7016 and 0857faf7-466d-470d-8183-6e64a7d62bfe all remain in state STARTED ...] 2025-05-28 17:24:43.448641 | orchestrator | 2025-05-28 17:24:43 | INFO  | Task 9d5dbdf6-3fee-4f13-9abc-0b267642dc71 is in state
STARTED 2025-05-28 17:24:43.452975 | orchestrator | 2025-05-28 17:24:43 | INFO  | Task 37cafdb3-9b68-47a1-a54a-4713396a7016 is in state SUCCESS 2025-05-28 17:24:43.455667 | orchestrator | 2025-05-28 17:24:43.455712 | orchestrator | 2025-05-28 17:24:43.455725 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-05-28 17:24:43.455738 | orchestrator | 2025-05-28 17:24:43.455749 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-05-28 17:24:43.455760 | orchestrator | Wednesday 28 May 2025 17:22:35 +0000 (0:00:00.677) 0:00:00.677 ********* 2025-05-28 17:24:43.455771 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:24:43.455905 | orchestrator | 2025-05-28 17:24:43.455923 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-05-28 17:24:43.455936 | orchestrator | Wednesday 28 May 2025 17:22:35 +0000 (0:00:00.639) 0:00:01.316 ********* 2025-05-28 17:24:43.455947 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:24:43.455960 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:24:43.455971 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:24:43.455982 | orchestrator | 2025-05-28 17:24:43.455993 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-05-28 17:24:43.456005 | orchestrator | Wednesday 28 May 2025 17:22:36 +0000 (0:00:00.618) 0:00:01.935 ********* 2025-05-28 17:24:43.456016 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:24:43.456027 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:24:43.456038 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:24:43.456050 | orchestrator | 2025-05-28 17:24:43.456061 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-05-28 17:24:43.456072 | orchestrator | Wednesday 28 May 2025 17:22:36 +0000 (0:00:00.278) 0:00:02.214 ********* 2025-05-28 17:24:43.456083 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:24:43.456609 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:24:43.456679 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:24:43.456692 | orchestrator | 2025-05-28 17:24:43.456703 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-05-28 17:24:43.456714 | orchestrator | Wednesday 28 May 2025 17:22:37 +0000 (0:00:00.776) 0:00:02.990 ********* 2025-05-28 17:24:43.456791 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:24:43.456806 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:24:43.456817 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:24:43.456827 | orchestrator | 2025-05-28 17:24:43.456838 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-05-28 17:24:43.456849 | orchestrator | Wednesday 28 May 2025 17:22:37 +0000 (0:00:00.292) 0:00:03.282 ********* 2025-05-28 17:24:43.456859 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:24:43.456870 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:24:43.456881 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:24:43.457157 | orchestrator | 2025-05-28 17:24:43.457170 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-05-28 17:24:43.457242 | orchestrator | Wednesday 28 May 2025 17:22:38 +0000 (0:00:00.269) 0:00:03.552 ********* 2025-05-28 17:24:43.457257 | orchestrator | ok: [testbed-node-3] 
2025-05-28 17:24:43.457267 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:24:43.457278 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:24:43.457289 | orchestrator | 2025-05-28 17:24:43.457300 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-05-28 17:24:43.457310 | orchestrator | Wednesday 28 May 2025 17:22:38 +0000 (0:00:00.290) 0:00:03.842 ********* 2025-05-28 17:24:43.457321 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:24:43.457333 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:24:43.457343 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:24:43.457353 | orchestrator | 2025-05-28 17:24:43.457365 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-05-28 17:24:43.457381 | orchestrator | Wednesday 28 May 2025 17:22:38 +0000 (0:00:00.448) 0:00:04.291 ********* 2025-05-28 17:24:43.457400 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:24:43.457416 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:24:43.457433 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:24:43.457451 | orchestrator | 2025-05-28 17:24:43.457470 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-28 17:24:43.457486 | orchestrator | Wednesday 28 May 2025 17:22:39 +0000 (0:00:00.269) 0:00:04.561 ********* 2025-05-28 17:24:43.457497 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-28 17:24:43.457508 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-28 17:24:43.457519 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-28 17:24:43.457529 | orchestrator | 2025-05-28 17:24:43.457540 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-05-28 17:24:43.457550 | orchestrator | Wednesday 28 May 2025 17:22:39 +0000 (0:00:00.649) 0:00:05.210 ********* 2025-05-28 17:24:43.457561 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:24:43.457571 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:24:43.457582 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:24:43.457592 | orchestrator | 2025-05-28 17:24:43.457602 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-05-28 17:24:43.457613 | orchestrator | Wednesday 28 May 2025 17:22:40 +0000 (0:00:00.406) 0:00:05.617 ********* 2025-05-28 17:24:43.457623 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-28 17:24:43.457634 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-28 17:24:43.457644 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-28 17:24:43.457655 | orchestrator | 2025-05-28 17:24:43.457665 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-05-28 17:24:43.457676 | orchestrator | Wednesday 28 May 2025 17:22:42 +0000 (0:00:02.205) 0:00:07.822 ********* 2025-05-28 17:24:43.457686 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-28 17:24:43.457697 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-28 17:24:43.457707 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-28 17:24:43.457779 | orchestrator | skipping: [testbed-node-3] 
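The ceph-facts tasks above and below show how the play locates a live monitor to talk to before creating pools: with containerized_deployment the mon socket checks are skipped (note the false_condition 'not containerized_deployment | bool' in the skip output), and each mon host is instead probed with docker ps -q --filter name=ceph-mon-<hostname>, exactly as echoed in the registered command results that follow. A mon host whose probe returns a container ID is recorded as the running mon and later used to build container_exec_cmd for cluster queries such as the fsid lookup (delegated to testbed-node-2 below). A minimal sketch of that probe; the register name and the 'mons' group name are illustrative:

    - name: Find a running mon container (sketch)
      ansible.builtin.command: "docker ps -q --filter name=ceph-mon-{{ item }}"
      register: find_mon_results
      changed_when: false
      failed_when: false
      delegate_to: "{{ item }}"     # the log shows delegation to each mon host
      loop: "{{ groups['mons'] }}"  # illustrative group name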
2025-05-28 17:24:43.457793 | orchestrator | 2025-05-28 17:24:43.457805 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-05-28 17:24:43.457870 | orchestrator | Wednesday 28 May 2025 17:22:42 +0000 (0:00:00.386) 0:00:08.209 ********* 2025-05-28 17:24:43.457888 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.457904 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.457931 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.457944 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:24:43.457956 | orchestrator | 2025-05-28 17:24:43.457973 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-05-28 17:24:43.457992 | orchestrator | Wednesday 28 May 2025 17:22:43 +0000 (0:00:00.761) 0:00:08.971 ********* 2025-05-28 17:24:43.458013 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.458106 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.458127 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.458146 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:24:43.458162 | orchestrator | 2025-05-28 17:24:43.458173 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-05-28 17:24:43.458183 | orchestrator | Wednesday 28 May 2025 17:22:43 +0000 (0:00:00.161) 0:00:09.132 ********* 2025-05-28 17:24:43.458196 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a943adc42ebb', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-28 17:22:40.873027', 'end': '2025-05-28 17:22:40.928338', 'delta': '0:00:00.055311', 'msg': '', 'invocation': 
{'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a943adc42ebb'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-05-28 17:24:43.458273 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '2f45b7c1f94b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-28 17:22:41.611630', 'end': '2025-05-28 17:22:41.651328', 'delta': '0:00:00.039698', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2f45b7c1f94b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-05-28 17:24:43.458357 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'e23eda19bf4f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-28 17:22:42.254048', 'end': '2025-05-28 17:22:42.302988', 'delta': '0:00:00.048940', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e23eda19bf4f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-05-28 17:24:43.458536 | orchestrator | 2025-05-28 17:24:43.458559 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-05-28 17:24:43.458579 | orchestrator | Wednesday 28 May 2025 17:22:44 +0000 (0:00:00.345) 0:00:09.477 ********* 2025-05-28 17:24:43.458598 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:24:43.458613 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:24:43.458624 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:24:43.458634 | orchestrator | 2025-05-28 17:24:43.458645 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-05-28 17:24:43.458655 | orchestrator | Wednesday 28 May 2025 17:22:44 +0000 (0:00:00.406) 0:00:09.884 ********* 2025-05-28 17:24:43.458666 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-05-28 17:24:43.458677 | orchestrator | 2025-05-28 17:24:43.458687 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-05-28 17:24:43.458698 | orchestrator | Wednesday 28 May 2025 17:22:46 +0000 (0:00:01.686) 0:00:11.571 ********* 2025-05-28 17:24:43.458709 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:24:43.458720 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:24:43.458815 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:24:43.458833 | orchestrator | 2025-05-28 17:24:43.458843 | orchestrator | TASK [ceph-facts : Get current fsid] 
******************************************* 2025-05-28 17:24:43.458854 | orchestrator | Wednesday 28 May 2025 17:22:46 +0000 (0:00:00.298) 0:00:11.869 ********* 2025-05-28 17:24:43.458865 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:24:43.458875 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:24:43.458886 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:24:43.458896 | orchestrator | 2025-05-28 17:24:43.458907 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-05-28 17:24:43.458918 | orchestrator | Wednesday 28 May 2025 17:22:46 +0000 (0:00:00.378) 0:00:12.248 ********* 2025-05-28 17:24:43.458932 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:24:43.458951 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:24:43.458969 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:24:43.458987 | orchestrator | 2025-05-28 17:24:43.459003 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-05-28 17:24:43.459021 | orchestrator | Wednesday 28 May 2025 17:22:47 +0000 (0:00:00.427) 0:00:12.676 ********* 2025-05-28 17:24:43.459038 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:24:43.459054 | orchestrator | 2025-05-28 17:24:43.459071 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-05-28 17:24:43.459091 | orchestrator | Wednesday 28 May 2025 17:22:47 +0000 (0:00:00.139) 0:00:12.815 ********* 2025-05-28 17:24:43.459110 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:24:43.459128 | orchestrator | 2025-05-28 17:24:43.459144 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-05-28 17:24:43.459161 | orchestrator | Wednesday 28 May 2025 17:22:47 +0000 (0:00:00.220) 0:00:13.036 ********* 2025-05-28 17:24:43.459179 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:24:43.459197 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:24:43.459284 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:24:43.459305 | orchestrator | 2025-05-28 17:24:43.459321 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-05-28 17:24:43.459332 | orchestrator | Wednesday 28 May 2025 17:22:47 +0000 (0:00:00.280) 0:00:13.316 ********* 2025-05-28 17:24:43.459356 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:24:43.459366 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:24:43.459377 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:24:43.459388 | orchestrator | 2025-05-28 17:24:43.459398 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-05-28 17:24:43.459409 | orchestrator | Wednesday 28 May 2025 17:22:48 +0000 (0:00:00.300) 0:00:13.616 ********* 2025-05-28 17:24:43.459422 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:24:43.459442 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:24:43.459460 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:24:43.459477 | orchestrator | 2025-05-28 17:24:43.459495 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-05-28 17:24:43.459514 | orchestrator | Wednesday 28 May 2025 17:22:48 +0000 (0:00:00.460) 0:00:14.077 ********* 2025-05-28 17:24:43.459532 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:24:43.459550 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:24:43.459569 | orchestrator | 
2025-05-28 17:24:43.459587 | orchestrator |
2025-05-28 17:24:43.459606 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-05-28 17:24:43.459625 | orchestrator | Wednesday 28 May 2025 17:22:49 +0000 (0:00:00.312) 0:00:14.389 *********
2025-05-28 17:24:43.459643 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:24:43.459654 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:24:43.459665 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:24:43.459676 | orchestrator |
2025-05-28 17:24:43.459687 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-05-28 17:24:43.459697 | orchestrator | Wednesday 28 May 2025 17:22:49 +0000 (0:00:00.283) 0:00:14.672 *********
2025-05-28 17:24:43.459708 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:24:43.459728 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:24:43.459745 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:24:43.459763 | orchestrator |
2025-05-28 17:24:43.459781 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-05-28 17:24:43.459859 | orchestrator | Wednesday 28 May 2025 17:22:49 +0000 (0:00:00.299) 0:00:14.972 *********
2025-05-28 17:24:43.459880 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:24:43.459896 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:24:43.459911 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:24:43.459927 | orchestrator |
2025-05-28 17:24:43.459942 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-05-28 17:24:43.459958 | orchestrator | Wednesday 28 May 2025 17:22:50 +0000 (0:00:00.478) 0:00:15.450 *********
2025-05-28 17:24:43.459977 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b27f73ed--a290--5ab5--82ba--70ebe910dd97-osd--block--b27f73ed--a290--5ab5--82ba--70ebe910dd97', 'dm-uuid-LVM-9KLnSV2FMdu5smNS3y5wyX3w7ayXNG7y8kFFVylj4M6XQm1D32z3UL9kTpdBpt24'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-05-28 17:24:43.459996 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fbdc558b--af0f--50ef--b610--4a3c4fb87cac-osd--block--fbdc558b--af0f--50ef--b610--4a3c4fb87cac', 'dm-uuid-LVM-3OcUXFJdZOjxX4MhVM6COoKVtLABKf07UF6CWmNn0ylHpl2JtM11yyjevZteTWOE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-05-28 17:24:43.460014 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None,
'virtual': 1}})  2025-05-28 17:24:43.460118 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:24:43.460138 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:24:43.460155 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:24:43.460173 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:24:43.460311 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:24:43.460333 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:24:43.460351 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:24:43.460372 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5', 'scsi-SQEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5-part1', 'scsi-SQEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5-part14', 'scsi-SQEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5-part15', 'scsi-SQEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5-part16', 'scsi-SQEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 17:24:43.460406 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b27f73ed--a290--5ab5--82ba--70ebe910dd97-osd--block--b27f73ed--a290--5ab5--82ba--70ebe910dd97'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-d65QUk-DtJC-JGe9-CIIx-PJTJ-W9E2-iJBFyL', 'scsi-0QEMU_QEMU_HARDDISK_da6420c4-4562-42e6-8445-8de06d590092', 'scsi-SQEMU_QEMU_HARDDISK_da6420c4-4562-42e6-8445-8de06d590092'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 17:24:43.460479 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b5b3f734--7a3a--56eb--b9e1--00e08c7f7e25-osd--block--b5b3f734--7a3a--56eb--b9e1--00e08c7f7e25', 'dm-uuid-LVM-LBOmjHRZzCuxZPOQJodwcdTLf69Ofevmg8e2XHQ3Pwz2n2xPxhpILlxcVPgbAlKk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-28 17:24:43.460500 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--fbdc558b--af0f--50ef--b610--4a3c4fb87cac-osd--block--fbdc558b--af0f--50ef--b610--4a3c4fb87cac'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Torr0x-o6IT-Uhyq-LPgW-VFfl-CEez-PDbgrh', 'scsi-0QEMU_QEMU_HARDDISK_66780fe2-f30a-4cd5-a925-045679329f08', 'scsi-SQEMU_QEMU_HARDDISK_66780fe2-f30a-4cd5-a925-045679329f08'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 17:24:43.460517 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7e811d1b--ccc9--571e--beba--983efbae239d-osd--block--7e811d1b--ccc9--571e--beba--983efbae239d', 'dm-uuid-LVM-CAITT3RP6TLMc9HmcMNx0JcxwXriugGpoki7VaPbKtuGl6xe2aNOrqHspFG1X3oT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-28 17:24:43.460535 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_705788e5-cc1d-4d40-94fd-fb0e2f22a483', 'scsi-SQEMU_QEMU_HARDDISK_705788e5-cc1d-4d40-94fd-fb0e2f22a483'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 17:24:43.460547 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:24:43.460558 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-28-16-27-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 17:24:43.460568 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:24:43.460608 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:24:43.460621 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:24:43.460631 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:24:43.460641 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:24:43.460657 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:24:43.460666 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:24:43.460677 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:24:43.460798 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c', 'scsi-SQEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c-part1', 'scsi-SQEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c-part14', 'scsi-SQEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c-part15', 'scsi-SQEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c-part16', 'scsi-SQEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 17:24:43.460824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': 
['ceph--b5b3f734--7a3a--56eb--b9e1--00e08c7f7e25-osd--block--b5b3f734--7a3a--56eb--b9e1--00e08c7f7e25'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-aH6NYF-XOTJ-BzO5-wlK5-Wg1X-YPyb-SmFGYl', 'scsi-0QEMU_QEMU_HARDDISK_0444fcd6-ace4-41be-a60f-d61a86741ad0', 'scsi-SQEMU_QEMU_HARDDISK_0444fcd6-ace4-41be-a60f-d61a86741ad0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 17:24:43.460842 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--7e811d1b--ccc9--571e--beba--983efbae239d-osd--block--7e811d1b--ccc9--571e--beba--983efbae239d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oa4YS1-Oof0-xLLq-Kbqf-lN5t-767L-fbWVLa', 'scsi-0QEMU_QEMU_HARDDISK_d5a98c17-e489-4dc0-a000-f021a8d49d4d', 'scsi-SQEMU_QEMU_HARDDISK_d5a98c17-e489-4dc0-a000-f021a8d49d4d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 17:24:43.460852 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3ba669b-02ce-4ac9-8d34-f5b1bbc1f6b4', 'scsi-SQEMU_QEMU_HARDDISK_c3ba669b-02ce-4ac9-8d34-f5b1bbc1f6b4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 17:24:43.460863 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--91f15584--1a8a--582b--a00a--c533bea87f37-osd--block--91f15584--1a8a--582b--a00a--c533bea87f37', 'dm-uuid-LVM-SZ7fUzalikI3yYKAExVeTMfqLzlx29glVO0dFKrypnLKwBHEDds3DU1HwME1nrC4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-28 17:24:43.460873 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-28-16-27-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 17:24:43.460897 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 
'links': {'ids': ['dm-name-ceph--d85522ca--9ab4--5810--aefe--18d74b0f7dbe-osd--block--d85522ca--9ab4--5810--aefe--18d74b0f7dbe', 'dm-uuid-LVM-AzC3Hw2lyZQrpdA8BrMkmXdWsef6cE9NyBcJfcYWpPONb2VHWS4VHXN4HV8cED63'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-28 17:24:43.460908 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:24:43.460918 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:24:43.460929 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:24:43.460946 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:24:43.460956 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:24:43.460966 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:24:43.460976 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:24:43.460986 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:24:43.460996 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 17:24:43.461041 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f', 'scsi-SQEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f-part1', 'scsi-SQEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f-part14', 'scsi-SQEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f-part15', 'scsi-SQEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f-part16', 'scsi-SQEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 17:24:43.461060 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--91f15584--1a8a--582b--a00a--c533bea87f37-osd--block--91f15584--1a8a--582b--a00a--c533bea87f37'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-SgwlIF-cvJP-49vP-C19Y-EBRD-SVc4-jUIiXe', 'scsi-0QEMU_QEMU_HARDDISK_1369a208-db5b-4ff3-8df7-c2f8ed8178e8', 'scsi-SQEMU_QEMU_HARDDISK_1369a208-db5b-4ff3-8df7-c2f8ed8178e8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 17:24:43.461071 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d85522ca--9ab4--5810--aefe--18d74b0f7dbe-osd--block--d85522ca--9ab4--5810--aefe--18d74b0f7dbe'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vCAuSE-MMAw-D5wt-rZoX-iPtq-UgGK-kpJaQz', 'scsi-0QEMU_QEMU_HARDDISK_3045bd6c-b8ff-4958-af32-f9dea72800f3', 'scsi-SQEMU_QEMU_HARDDISK_3045bd6c-b8ff-4958-af32-f9dea72800f3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 17:24:43.461081 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80beb2a7-6ee1-4917-8c3d-de783739f119', 'scsi-SQEMU_QEMU_HARDDISK_80beb2a7-6ee1-4917-8c3d-de783739f119'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 17:24:43.461101 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-28-16-27-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 17:24:43.461111 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:24:43.461121 | orchestrator | 2025-05-28 17:24:43.461131 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-05-28 17:24:43.461149 | orchestrator | Wednesday 28 May 2025 17:22:50 +0000 (0:00:00.561) 0:00:16.012 ********* 2025-05-28 17:24:43.461160 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b27f73ed--a290--5ab5--82ba--70ebe910dd97-osd--block--b27f73ed--a290--5ab5--82ba--70ebe910dd97', 'dm-uuid-LVM-9KLnSV2FMdu5smNS3y5wyX3w7ayXNG7y8kFFVylj4M6XQm1D32z3UL9kTpdBpt24'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461195 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fbdc558b--af0f--50ef--b610--4a3c4fb87cac-osd--block--fbdc558b--af0f--50ef--b610--4a3c4fb87cac', 'dm-uuid-LVM-3OcUXFJdZOjxX4MhVM6COoKVtLABKf07UF6CWmNn0ylHpl2JtM11yyjevZteTWOE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461345 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461374 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461385 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461416 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461436 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461447 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b5b3f734--7a3a--56eb--b9e1--00e08c7f7e25-osd--block--b5b3f734--7a3a--56eb--b9e1--00e08c7f7e25', 'dm-uuid-LVM-LBOmjHRZzCuxZPOQJodwcdTLf69Ofevmg8e2XHQ3Pwz2n2xPxhpILlxcVPgbAlKk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461456 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461466 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7e811d1b--ccc9--571e--beba--983efbae239d-osd--block--7e811d1b--ccc9--571e--beba--983efbae239d', 'dm-uuid-LVM-CAITT3RP6TLMc9HmcMNx0JcxwXriugGpoki7VaPbKtuGl6xe2aNOrqHspFG1X3oT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461476 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461498 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461515 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461525 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461536 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5', 'scsi-SQEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5-part1', 'scsi-SQEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5-part14', 'scsi-SQEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5-part15', 'scsi-SQEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5-part16', 'scsi-SQEMU_QEMU_HARDDISK_3e07e7c9-91b0-4ca1-b00e-661089b639c5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461559 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461604 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--b27f73ed--a290--5ab5--82ba--70ebe910dd97-osd--block--b27f73ed--a290--5ab5--82ba--70ebe910dd97'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-d65QUk-DtJC-JGe9-CIIx-PJTJ-W9E2-iJBFyL', 'scsi-0QEMU_QEMU_HARDDISK_da6420c4-4562-42e6-8445-8de06d590092', 'scsi-SQEMU_QEMU_HARDDISK_da6420c4-4562-42e6-8445-8de06d590092'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461615 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461625 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--fbdc558b--af0f--50ef--b610--4a3c4fb87cac-osd--block--fbdc558b--af0f--50ef--b610--4a3c4fb87cac'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Torr0x-o6IT-Uhyq-LPgW-VFfl-CEez-PDbgrh', 'scsi-0QEMU_QEMU_HARDDISK_66780fe2-f30a-4cd5-a925-045679329f08', 'scsi-SQEMU_QEMU_HARDDISK_66780fe2-f30a-4cd5-a925-045679329f08'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461635 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461668 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_705788e5-cc1d-4d40-94fd-fb0e2f22a483', 'scsi-SQEMU_QEMU_HARDDISK_705788e5-cc1d-4d40-94fd-fb0e2f22a483'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461686 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461697 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-28-16-27-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461707 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:24:43.461717 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461727 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461752 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 
'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c', 'scsi-SQEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c-part1', 'scsi-SQEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c-part14', 'scsi-SQEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c-part15', 'scsi-SQEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c-part16', 'scsi-SQEMU_QEMU_HARDDISK_eb048e8a-8419-4b97-a2c5-865582781a7c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461772 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--b5b3f734--7a3a--56eb--b9e1--00e08c7f7e25-osd--block--b5b3f734--7a3a--56eb--b9e1--00e08c7f7e25'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-aH6NYF-XOTJ-BzO5-wlK5-Wg1X-YPyb-SmFGYl', 'scsi-0QEMU_QEMU_HARDDISK_0444fcd6-ace4-41be-a60f-d61a86741ad0', 'scsi-SQEMU_QEMU_HARDDISK_0444fcd6-ace4-41be-a60f-d61a86741ad0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461783 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--7e811d1b--ccc9--571e--beba--983efbae239d-osd--block--7e811d1b--ccc9--571e--beba--983efbae239d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oa4YS1-Oof0-xLLq-Kbqf-lN5t-767L-fbWVLa', 'scsi-0QEMU_QEMU_HARDDISK_d5a98c17-e489-4dc0-a000-f021a8d49d4d', 'scsi-SQEMU_QEMU_HARDDISK_d5a98c17-e489-4dc0-a000-f021a8d49d4d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461799 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--91f15584--1a8a--582b--a00a--c533bea87f37-osd--block--91f15584--1a8a--582b--a00a--c533bea87f37', 'dm-uuid-LVM-SZ7fUzalikI3yYKAExVeTMfqLzlx29glVO0dFKrypnLKwBHEDds3DU1HwME1nrC4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461837 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3ba669b-02ce-4ac9-8d34-f5b1bbc1f6b4', 'scsi-SQEMU_QEMU_HARDDISK_c3ba669b-02ce-4ac9-8d34-f5b1bbc1f6b4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461847 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d85522ca--9ab4--5810--aefe--18d74b0f7dbe-osd--block--d85522ca--9ab4--5810--aefe--18d74b0f7dbe', 'dm-uuid-LVM-AzC3Hw2lyZQrpdA8BrMkmXdWsef6cE9NyBcJfcYWpPONb2VHWS4VHXN4HV8cED63'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461855 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': 
['2025-05-28-16-27-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461863 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:24:43.461872 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461880 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461888 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461910 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461919 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461927 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461935 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461943 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461961 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f', 'scsi-SQEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f-part1', 'scsi-SQEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f-part14', 'scsi-SQEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f-part15', 'scsi-SQEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f-part16', 'scsi-SQEMU_QEMU_HARDDISK_536d5e59-8868-442a-b439-21fdbfcfc02f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461975 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--91f15584--1a8a--582b--a00a--c533bea87f37-osd--block--91f15584--1a8a--582b--a00a--c533bea87f37'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-SgwlIF-cvJP-49vP-C19Y-EBRD-SVc4-jUIiXe', 'scsi-0QEMU_QEMU_HARDDISK_1369a208-db5b-4ff3-8df7-c2f8ed8178e8', 'scsi-SQEMU_QEMU_HARDDISK_1369a208-db5b-4ff3-8df7-c2f8ed8178e8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461984 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d85522ca--9ab4--5810--aefe--18d74b0f7dbe-osd--block--d85522ca--9ab4--5810--aefe--18d74b0f7dbe'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vCAuSE-MMAw-D5wt-rZoX-iPtq-UgGK-kpJaQz', 'scsi-0QEMU_QEMU_HARDDISK_3045bd6c-b8ff-4958-af32-f9dea72800f3', 'scsi-SQEMU_QEMU_HARDDISK_3045bd6c-b8ff-4958-af32-f9dea72800f3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.461992 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80beb2a7-6ee1-4917-8c3d-de783739f119', 'scsi-SQEMU_QEMU_HARDDISK_80beb2a7-6ee1-4917-8c3d-de783739f119'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.462061 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-28-16-27-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-28 17:24:43.462073 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:24:43.462081 | orchestrator | 2025-05-28 17:24:43.462089 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-05-28 17:24:43.462098 | orchestrator | Wednesday 28 May 2025 17:22:51 +0000 (0:00:00.610) 0:00:16.622 ********* 2025-05-28 17:24:43.462106 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:24:43.462114 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:24:43.462122 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:24:43.462130 | orchestrator | 2025-05-28 17:24:43.462138 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-05-28 17:24:43.462146 | orchestrator | Wednesday 28 May 2025 17:22:51 +0000 (0:00:00.668) 0:00:17.291 ********* 2025-05-28 17:24:43.462154 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:24:43.462161 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:24:43.462169 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:24:43.462177 | orchestrator | 2025-05-28 17:24:43.462185 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-05-28 17:24:43.462193 | orchestrator | Wednesday 28 May 2025 17:22:52 +0000 (0:00:00.479) 0:00:17.770 ********* 2025-05-28 17:24:43.462201 | 
orchestrator | ok: [testbed-node-3] 2025-05-28 17:24:43.462229 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:24:43.462239 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:24:43.462246 | orchestrator | 2025-05-28 17:24:43.462254 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-05-28 17:24:43.462262 | orchestrator | Wednesday 28 May 2025 17:22:52 +0000 (0:00:00.617) 0:00:18.388 ********* 2025-05-28 17:24:43.462270 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:24:43.462278 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:24:43.462286 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:24:43.462293 | orchestrator | 2025-05-28 17:24:43.462301 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-05-28 17:24:43.462309 | orchestrator | Wednesday 28 May 2025 17:22:53 +0000 (0:00:00.263) 0:00:18.651 ********* 2025-05-28 17:24:43.462317 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:24:43.462325 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:24:43.462332 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:24:43.462340 | orchestrator | 2025-05-28 17:24:43.462348 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-05-28 17:24:43.462355 | orchestrator | Wednesday 28 May 2025 17:22:53 +0000 (0:00:00.379) 0:00:19.031 ********* 2025-05-28 17:24:43.462363 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:24:43.462371 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:24:43.462385 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:24:43.462393 | orchestrator | 2025-05-28 17:24:43.462400 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-05-28 17:24:43.462408 | orchestrator | Wednesday 28 May 2025 17:22:54 +0000 (0:00:00.455) 0:00:19.487 ********* 2025-05-28 17:24:43.462416 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-05-28 17:24:43.462424 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-05-28 17:24:43.462432 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-05-28 17:24:43.462439 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-05-28 17:24:43.462447 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-05-28 17:24:43.462455 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-05-28 17:24:43.462462 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-05-28 17:24:43.462470 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-05-28 17:24:43.462478 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-05-28 17:24:43.462485 | orchestrator | 2025-05-28 17:24:43.462493 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-05-28 17:24:43.462501 | orchestrator | Wednesday 28 May 2025 17:22:54 +0000 (0:00:00.818) 0:00:20.306 ********* 2025-05-28 17:24:43.462509 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-28 17:24:43.462516 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-28 17:24:43.462524 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-28 17:24:43.462532 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:24:43.462539 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-28 17:24:43.462547 | orchestrator | 
skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-28 17:24:43.462555 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-28 17:24:43.462562 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:24:43.462570 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-28 17:24:43.462577 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-28 17:24:43.462585 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-28 17:24:43.462593 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:24:43.462600 | orchestrator | 2025-05-28 17:24:43.462608 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-05-28 17:24:43.462616 | orchestrator | Wednesday 28 May 2025 17:22:55 +0000 (0:00:00.315) 0:00:20.621 ********* 2025-05-28 17:24:43.462624 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:24:43.462632 | orchestrator | 2025-05-28 17:24:43.462640 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-28 17:24:43.462653 | orchestrator | Wednesday 28 May 2025 17:22:55 +0000 (0:00:00.648) 0:00:21.270 ********* 2025-05-28 17:24:43.462675 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:24:43.462684 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:24:43.462691 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:24:43.462699 | orchestrator | 2025-05-28 17:24:43.462712 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-28 17:24:43.462720 | orchestrator | Wednesday 28 May 2025 17:22:56 +0000 (0:00:00.331) 0:00:21.602 ********* 2025-05-28 17:24:43.462728 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:24:43.462736 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:24:43.462744 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:24:43.462752 | orchestrator | 2025-05-28 17:24:43.462759 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-28 17:24:43.462767 | orchestrator | Wednesday 28 May 2025 17:22:56 +0000 (0:00:00.300) 0:00:21.902 ********* 2025-05-28 17:24:43.462775 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:24:43.462783 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:24:43.462799 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:24:43.462807 | orchestrator | 2025-05-28 17:24:43.462815 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-05-28 17:24:43.462822 | orchestrator | Wednesday 28 May 2025 17:22:56 +0000 (0:00:00.304) 0:00:22.206 ********* 2025-05-28 17:24:43.462830 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:24:43.462838 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:24:43.462846 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:24:43.462853 | orchestrator | 2025-05-28 17:24:43.462861 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-05-28 17:24:43.462869 | orchestrator | Wednesday 28 May 2025 17:22:57 +0000 (0:00:00.547) 0:00:22.754 ********* 2025-05-28 17:24:43.462877 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-28 17:24:43.462884 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-28 
17:24:43.462892 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-28 17:24:43.462900 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:24:43.462908 | orchestrator | 2025-05-28 17:24:43.462915 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-28 17:24:43.462923 | orchestrator | Wednesday 28 May 2025 17:22:57 +0000 (0:00:00.354) 0:00:23.108 ********* 2025-05-28 17:24:43.462931 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-28 17:24:43.462939 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-28 17:24:43.462947 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-28 17:24:43.462954 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:24:43.462962 | orchestrator | 2025-05-28 17:24:43.462970 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-28 17:24:43.462978 | orchestrator | Wednesday 28 May 2025 17:22:58 +0000 (0:00:00.331) 0:00:23.440 ********* 2025-05-28 17:24:43.462986 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-28 17:24:43.462994 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-28 17:24:43.463001 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-28 17:24:43.463009 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:24:43.463017 | orchestrator | 2025-05-28 17:24:43.463025 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-05-28 17:24:43.463033 | orchestrator | Wednesday 28 May 2025 17:22:58 +0000 (0:00:00.375) 0:00:23.815 ********* 2025-05-28 17:24:43.463040 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:24:43.463048 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:24:43.463056 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:24:43.463064 | orchestrator | 2025-05-28 17:24:43.463072 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-05-28 17:24:43.463079 | orchestrator | Wednesday 28 May 2025 17:22:58 +0000 (0:00:00.312) 0:00:24.128 ********* 2025-05-28 17:24:43.463087 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-28 17:24:43.463095 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-28 17:24:43.463103 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-28 17:24:43.463111 | orchestrator | 2025-05-28 17:24:43.463118 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-05-28 17:24:43.463126 | orchestrator | Wednesday 28 May 2025 17:22:59 +0000 (0:00:00.504) 0:00:24.633 ********* 2025-05-28 17:24:43.463134 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-28 17:24:43.463142 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-28 17:24:43.463150 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-28 17:24:43.463157 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-05-28 17:24:43.463165 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-28 17:24:43.463173 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-28 17:24:43.463186 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => 
(item=testbed-manager) 2025-05-28 17:24:43.463194 | orchestrator | 2025-05-28 17:24:43.463201 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-05-28 17:24:43.463252 | orchestrator | Wednesday 28 May 2025 17:23:00 +0000 (0:00:01.002) 0:00:25.636 ********* 2025-05-28 17:24:43.463262 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-28 17:24:43.463269 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-28 17:24:43.463277 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-28 17:24:43.463285 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-05-28 17:24:43.463293 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-28 17:24:43.463305 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-28 17:24:43.463313 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-28 17:24:43.463321 | orchestrator | 2025-05-28 17:24:43.463334 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-05-28 17:24:43.463342 | orchestrator | Wednesday 28 May 2025 17:23:02 +0000 (0:00:01.870) 0:00:27.507 ********* 2025-05-28 17:24:43.463350 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:24:43.463357 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:24:43.463365 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-05-28 17:24:43.463373 | orchestrator | 2025-05-28 17:24:43.463381 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-05-28 17:24:43.463389 | orchestrator | Wednesday 28 May 2025 17:23:02 +0000 (0:00:00.361) 0:00:27.868 ********* 2025-05-28 17:24:43.463397 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-28 17:24:43.463406 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-28 17:24:43.463414 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-28 17:24:43.463422 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-28 17:24:43.463430 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 
'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-28 17:24:43.463438 | orchestrator | 2025-05-28 17:24:43.463446 | orchestrator | TASK [generate keys] *********************************************************** 2025-05-28 17:24:43.463454 | orchestrator | Wednesday 28 May 2025 17:23:48 +0000 (0:00:46.012) 0:01:13.880 ********* 2025-05-28 17:24:43.463462 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 17:24:43.463469 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 17:24:43.463477 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 17:24:43.463491 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 17:24:43.463499 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 17:24:43.463507 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 17:24:43.463515 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-05-28 17:24:43.463523 | orchestrator | 2025-05-28 17:24:43.463530 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-05-28 17:24:43.463538 | orchestrator | Wednesday 28 May 2025 17:24:12 +0000 (0:00:23.518) 0:01:37.399 ********* 2025-05-28 17:24:43.463546 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 17:24:43.463554 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 17:24:43.463561 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 17:24:43.463569 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 17:24:43.463577 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 17:24:43.463585 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 17:24:43.463593 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-28 17:24:43.463600 | orchestrator | 2025-05-28 17:24:43.463608 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-05-28 17:24:43.463616 | orchestrator | Wednesday 28 May 2025 17:24:24 +0000 (0:00:12.180) 0:01:49.579 ********* 2025-05-28 17:24:43.463624 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 17:24:43.463631 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-28 17:24:43.463639 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-28 17:24:43.463647 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 17:24:43.463677 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-28 17:24:43.463685 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-28 17:24:43.463698 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 17:24:43.463706 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-28 17:24:43.463714 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-2(192.168.16.12)] => (item=None)
2025-05-28 17:24:43.463721 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-28 17:24:43.463729 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-28 17:24:43.463737 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-28 17:24:43.463745 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-28 17:24:43.463753 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-28 17:24:43.463760 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-28 17:24:43.463768 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-28 17:24:43.463776 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-28 17:24:43.463784 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-28 17:24:43.463791 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-05-28 17:24:43.463799 | orchestrator |
2025-05-28 17:24:43.463807 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 17:24:43.463815 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2025-05-28 17:24:43.463832 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-05-28 17:24:43.463840 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-05-28 17:24:43.463848 | orchestrator |
2025-05-28 17:24:43.463856 | orchestrator |
2025-05-28 17:24:43.463863 | orchestrator |
2025-05-28 17:24:43.463871 | orchestrator | TASKS RECAP ********************************************************************
2025-05-28 17:24:43.463879 | orchestrator | Wednesday 28 May 2025 17:24:41 +0000 (0:00:17.206) 0:02:06.785 *********
2025-05-28 17:24:43.463887 | orchestrator | ===============================================================================
2025-05-28 17:24:43.463894 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.01s
2025-05-28 17:24:43.463902 | orchestrator | generate keys ---------------------------------------------------------- 23.52s
2025-05-28 17:24:43.463910 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.21s
2025-05-28 17:24:43.463918 | orchestrator | get keys from monitors ------------------------------------------------- 12.18s
2025-05-28 17:24:43.463926 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.21s
2025-05-28 17:24:43.463934 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.87s
2025-05-28 17:24:43.463941 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.69s
2025-05-28 17:24:43.463949 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.00s
2025-05-28 17:24:43.463957 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.82s
2025-05-28 17:24:43.463965 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.78s
2025-05-28 17:24:43.463973 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.76s
2025-05-28 17:24:43.463980 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.67s
2025-05-28 17:24:43.463988 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.65s
2025-05-28 17:24:43.463996 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.65s
2025-05-28 17:24:43.464004 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.64s
2025-05-28 17:24:43.464011 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.62s
2025-05-28 17:24:43.464019 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.62s
2025-05-28 17:24:43.464027 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.61s
2025-05-28 17:24:43.464035 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.56s
2025-05-28 17:24:43.464043 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.55s
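The long runs of per-device "skipping" items earlier in this play come from a ceph-ansible fact task that loops over every block device in ansible_facts['devices'] (sda and its partitions, the Ceph LVM volumes on sdb/sdc, the spare sdd, the config-drive sr0, and the loop devices) and only acts when OSD auto-discovery is enabled. In this testbed osd_auto_discovery is unset, so the condition "osd_auto_discovery | default(False) | bool" is False for every item and each device reports "Conditional result was False". A minimal sketch of how that Jinja2 expression evaluates, assuming only the jinja2 package; the harness is illustrative, not ceph-ansible code:

    # Evaluate the skip condition from the log the way Ansible's templating would.
    # `default` is a Jinja2 built-in; Ansible's `bool` filter is approximated here.
    from jinja2 import Environment

    def to_bool(value):
        # Rough equivalent of Ansible's bool filter for the common cases.
        return str(value).strip().lower() in ("1", "true", "yes", "on")

    env = Environment()
    env.filters["bool"] = to_bool

    condition = env.compile_expression(
        "osd_auto_discovery | default(False) | bool", undefined_to_none=False)

    print(condition())                          # False -> item skipped, as in the log
    print(condition(osd_auto_discovery="yes"))  # True  -> devices would become OSDs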
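The "_monitor_addresses - ipv4" task above builds, per host, the list of monitor name/address pairs that is later templated into ceph.conf. A plausible reading of the fact's shape, using the hostnames and 192.168.16.x addresses that appear elsewhere in this log (the lookup table itself is illustrative, not taken from the role):

    # One name/addr entry per host in the monitor group.
    MON_GROUP = ["testbed-node-0", "testbed-node-1", "testbed-node-2"]
    ADDRESSES = {
        "testbed-node-0": "192.168.16.10",
        "testbed-node-1": "192.168.16.11",
        "testbed-node-2": "192.168.16.12",
    }

    _monitor_addresses = [{"name": host, "addr": ADDRESSES[host]} for host in MON_GROUP]
    print(_monitor_addresses)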
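"create openstack pool(s)" is delegated to the first monitor (testbed-node-0) and creates the five RBD pools listed in the loop items: backups, volumes, images, metrics and vms, each with pg_num/pgp_num 32, size 3, the replicated_rule CRUSH rule and the PG autoscaler off. Expressed as the equivalent ceph CLI calls; a sketch that assumes a reachable cluster and an admin keyring, since the log does not show the exact commands the task runs:

    import subprocess

    POOLS = ["backups", "volumes", "images", "metrics", "vms"]

    for pool in POOLS:
        # pg_num/pgp_num 32, replicated pool using replicated_rule
        subprocess.run(["ceph", "osd", "pool", "create", pool, "32", "32",
                        "replicated", "replicated_rule"], check=True)
        # 'size': 3 in every item
        subprocess.run(["ceph", "osd", "pool", "set", pool, "size", "3"], check=True)
        # 'pg_autoscale_mode': False in every item
        subprocess.run(["ceph", "osd", "pool", "set", pool, "pg_autoscale_mode", "off"], check=True)
        # 'application': 'rbd'
        subprocess.run(["ceph", "osd", "pool", "application", "enable", pool, "rbd"], check=True)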
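"generate keys" and "get keys from monitors" also run against testbed-node-0 and create and read back the cephx client keys whose keyrings surface later in the log (admin, cinder, cinder-backup, nova, glance, gnocchi, manila); "copy ceph key(s) if needed" then distributes them to all three monitor nodes. A sketch of the underlying cephx operations, assuming admin access on a monitor; the client selection and capability strings here are illustrative assumptions, the log does not show the real ones:

    # get-or-create a cephx client key on a monitor, then read the keyring back.
    import subprocess

    CLIENTS = {
        "client.glance": "profile rbd pool=images",
        "client.cinder": "profile rbd pool=volumes, profile rbd pool=vms, "
                         "profile rbd-read-only pool=images",
    }

    for name, osd_caps in CLIENTS.items():
        # Idempotent: a second run returns the existing key instead of failing.
        subprocess.run(["ceph", "auth", "get-or-create", name,
                        "mon", "profile rbd", "osd", osd_caps], check=True)
        keyring = subprocess.run(["ceph", "auth", "get", name],
                                 check=True, capture_output=True, text=True).stdout
        with open(f"/etc/ceph/ceph.{name}.keyring", "w") as fh:  # target path is an assumption
            fh.write(keyring)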
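The interleaved "Task ... is in state STARTED" lines that follow are not Ansible output: the osism client on the manager polls the state of the tasks it started (here three, later four, task IDs) once per second until each reports SUCCESS. A generic sketch of such a wait loop; the real loop lives in the osism tooling, and get_state is a stand-in for whatever returns a task's current state:

    import time

    def wait_for_tasks(task_ids, get_state, interval=1.0):
        # Poll every pending task, print its state, drop it once it succeeds.
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state == "SUCCESS":
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval:.0f} second(s) until the next check")
                time.sleep(interval)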
2025-05-28 17:24:43.464050 | orchestrator | 2025-05-28 17:24:43 | INFO  | Task 0c24686a-df04-423a-a0ed-bb55c8ec0861 is in state STARTED
2025-05-28 17:24:43.464058 | orchestrator | 2025-05-28 17:24:43 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state STARTED
2025-05-28 17:24:43.464066 | orchestrator | 2025-05-28 17:24:43 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:24:46.520906 | orchestrator | 2025-05-28 17:24:46 | INFO  | Task 9d5dbdf6-3fee-4f13-9abc-0b267642dc71 is in state STARTED
2025-05-28 17:24:46.525509 | orchestrator | 2025-05-28 17:24:46 | INFO  | Task 0c24686a-df04-423a-a0ed-bb55c8ec0861 is in state STARTED
2025-05-28 17:24:46.527544 | orchestrator | 2025-05-28 17:24:46 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state STARTED
2025-05-28 17:24:46.527571 | orchestrator | 2025-05-28 17:24:46 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:24:49.582989 | orchestrator | 2025-05-28 17:24:49 | INFO  | Task 9d5dbdf6-3fee-4f13-9abc-0b267642dc71 is in state STARTED
2025-05-28 17:24:49.583516 | orchestrator | 2025-05-28 17:24:49 | INFO  | Task 0c24686a-df04-423a-a0ed-bb55c8ec0861 is in state STARTED
2025-05-28 17:24:49.584688 | orchestrator | 2025-05-28 17:24:49 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state STARTED
2025-05-28 17:24:49.584712 | orchestrator | 2025-05-28 17:24:49 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:24:52.642468 | orchestrator | 2025-05-28 17:24:52 | INFO  | Task 9d5dbdf6-3fee-4f13-9abc-0b267642dc71 is in state STARTED
2025-05-28 17:24:52.643926 | orchestrator | 2025-05-28 17:24:52 | INFO  | Task 0c24686a-df04-423a-a0ed-bb55c8ec0861 is in state STARTED
2025-05-28 17:24:52.644999 | orchestrator | 2025-05-28 17:24:52 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state STARTED
2025-05-28 17:24:52.645023 | orchestrator | 2025-05-28 17:24:52 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:24:55.702457 | orchestrator | 2025-05-28 17:24:55 | INFO  | Task 9d5dbdf6-3fee-4f13-9abc-0b267642dc71 is in state STARTED
2025-05-28 17:24:55.704000 | orchestrator | 2025-05-28 17:24:55 | INFO  | Task 0c24686a-df04-423a-a0ed-bb55c8ec0861 is in state STARTED
2025-05-28 17:24:55.705908 | orchestrator | 2025-05-28 17:24:55 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state STARTED
2025-05-28 17:24:55.706603 | orchestrator | 2025-05-28 17:24:55 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:24:58.762077 | orchestrator | 2025-05-28 17:24:58 | INFO  | Task 9d5dbdf6-3fee-4f13-9abc-0b267642dc71 is in state STARTED
2025-05-28 17:24:58.763646 | orchestrator | 2025-05-28 17:24:58 | INFO  | Task 0c24686a-df04-423a-a0ed-bb55c8ec0861 is in state STARTED
2025-05-28 17:24:58.766940 | orchestrator | 2025-05-28 17:24:58 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state STARTED
2025-05-28 17:24:58.767432 | orchestrator | 2025-05-28 17:24:58 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:25:01.820929 | orchestrator | 2025-05-28 17:25:01 | INFO  | Task 9d5dbdf6-3fee-4f13-9abc-0b267642dc71 is in state STARTED
2025-05-28 17:25:01.822156 | orchestrator | 2025-05-28 17:25:01 | INFO  | Task 0c24686a-df04-423a-a0ed-bb55c8ec0861 is in state STARTED
2025-05-28 17:25:01.825019 | orchestrator | 2025-05-28 17:25:01 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state STARTED
2025-05-28 17:25:01.825136 | orchestrator | 2025-05-28 17:25:01 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:25:04.882886 | orchestrator | 2025-05-28 17:25:04 | INFO  | Task 9d5dbdf6-3fee-4f13-9abc-0b267642dc71 is in state STARTED
2025-05-28 17:25:04.884676 | orchestrator | 2025-05-28 17:25:04 | INFO  | Task 0c24686a-df04-423a-a0ed-bb55c8ec0861 is in state STARTED
2025-05-28 17:25:04.886000 | orchestrator | 2025-05-28 17:25:04 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state STARTED
2025-05-28 17:25:04.886258 | orchestrator | 2025-05-28 17:25:04 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:25:07.938788 | orchestrator | 2025-05-28 17:25:07 | INFO  | Task 9d5dbdf6-3fee-4f13-9abc-0b267642dc71 is in state STARTED
2025-05-28 17:25:07.939692 | orchestrator | 2025-05-28 17:25:07 | INFO  | Task 0c24686a-df04-423a-a0ed-bb55c8ec0861 is in state STARTED
2025-05-28 17:25:07.941595 | orchestrator | 2025-05-28 17:25:07 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state STARTED
2025-05-28 17:25:07.941864 | orchestrator | 2025-05-28 17:25:07 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:25:11.005559 | orchestrator | 2025-05-28 17:25:11 | INFO  | Task 9d5dbdf6-3fee-4f13-9abc-0b267642dc71 is in state STARTED
2025-05-28 17:25:11.006082 | orchestrator | 2025-05-28 17:25:11 | INFO  | Task 0c24686a-df04-423a-a0ed-bb55c8ec0861 is in state SUCCESS
2025-05-28 17:25:11.006154 | orchestrator | 2025-05-28 17:25:11 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state STARTED
2025-05-28 17:25:11.006175 | orchestrator | 2025-05-28 17:25:11 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:25:14.063435 | orchestrator | 2025-05-28 17:25:14 | INFO  | Task 9d5dbdf6-3fee-4f13-9abc-0b267642dc71 is in state STARTED
2025-05-28 17:25:14.064749 | orchestrator | 2025-05-28 17:25:14 | INFO  | Task 5ad4789a-1168-4780-a06c-a2e94241c756 is in state STARTED
2025-05-28 17:25:14.066727 | orchestrator | 2025-05-28 17:25:14 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state STARTED
2025-05-28 17:25:14.066764 | orchestrator | 2025-05-28 17:25:14 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:25:17.119228 | orchestrator | 2025-05-28 17:25:17 | INFO  | Task 9d5dbdf6-3fee-4f13-9abc-0b267642dc71 is in state STARTED
2025-05-28 17:25:17.120371 | orchestrator | 2025-05-28 17:25:17 | INFO  | Task 5ad4789a-1168-4780-a06c-a2e94241c756 is in state STARTED
2025-05-28 17:25:17.121849 | orchestrator | 2025-05-28 17:25:17 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state STARTED
2025-05-28 17:25:17.121884 | orchestrator | 2025-05-28 17:25:17 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:25:20.167334 | orchestrator | 2025-05-28 17:25:20 | INFO  | Task 9d5dbdf6-3fee-4f13-9abc-0b267642dc71 is in state STARTED
2025-05-28 17:25:20.169473 | orchestrator | 2025-05-28 17:25:20 | INFO  | Task 5ad4789a-1168-4780-a06c-a2e94241c756 is in state STARTED
2025-05-28 17:25:20.171720 | orchestrator | 2025-05-28 17:25:20 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state STARTED
2025-05-28 17:25:20.171750 | orchestrator | 2025-05-28 17:25:20 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:25:23.235519 | orchestrator | 2025-05-28 17:25:23 | INFO  | Task 9d5dbdf6-3fee-4f13-9abc-0b267642dc71 is in state SUCCESS
2025-05-28 17:25:23.237919 | orchestrator |
2025-05-28 17:25:23.237957 | orchestrator |
2025-05-28 17:25:23.237972 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-05-28 17:25:23.237985 | orchestrator |
2025-05-28 17:25:23.237997 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2025-05-28 17:25:23.238009 | orchestrator | Wednesday 28 May 2025 17:24:45 +0000 (0:00:00.149) 0:00:00.149 *********
2025-05-28 17:25:23.238083 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2025-05-28 17:25:23.238097 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-05-28 17:25:23.238109 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-05-28 17:25:23.238120 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2025-05-28 17:25:23.238132 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-05-28 17:25:23.238143 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2025-05-28 17:25:23.238155 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2025-05-28 17:25:23.238166 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2025-05-28 17:25:23.238177 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2025-05-28 17:25:23.238189 | orchestrator |
2025-05-28 17:25:23.238223 | orchestrator | TASK [Create share directory] **************************************************
2025-05-28 17:25:23.238264 | orchestrator | Wednesday 28 May 2025 17:24:49 +0000 (0:00:04.047) 0:00:04.197 *********
2025-05-28 17:25:23.238276 | orchestrator | changed: [testbed-manager -> localhost]
2025-05-28 17:25:23.238288 | orchestrator |
2025-05-28 17:25:23.238298 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2025-05-28 17:25:23.238309 | orchestrator | Wednesday 28 May 2025 17:24:50 +0000 (0:00:01.004) 0:00:05.201 *********
2025-05-28 17:25:23.238320 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-05-28 17:25:23.238331 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-28 17:25:23.238342 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-28 17:25:23.238353 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-05-28 17:25:23.238634 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-28 17:25:23.238647 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-05-28 17:25:23.238658 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-05-28 17:25:23.238669 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-05-28 17:25:23.238680 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-05-28 17:25:23.238691 | orchestrator |
2025-05-28 17:25:23.238702 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2025-05-28 17:25:23.238713 | orchestrator | Wednesday 28 May 2025 17:25:03 +0000 (0:00:12.801) 0:00:18.003 *********
2025-05-28 17:25:23.238724 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2025-05-28 17:25:23.238735 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-05-28 17:25:23.238746 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-05-28 17:25:23.238827 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2025-05-28 17:25:23.238840 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-05-28 17:25:23.238851 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2025-05-28 17:25:23.238862 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2025-05-28 17:25:23.238872 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2025-05-28 17:25:23.238883 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2025-05-28 17:25:23.238895 | orchestrator |
2025-05-28 17:25:23.238905 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 17:25:23.238917 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 17:25:23.238930 | orchestrator |
2025-05-28 17:25:23.238941 | orchestrator |
2025-05-28 17:25:23.238952 | orchestrator | TASKS RECAP ********************************************************************
2025-05-28 17:25:23.238962 | orchestrator | Wednesday 28 May 2025 17:25:10 +0000 (0:00:06.574) 0:00:24.578 *********
2025-05-28 17:25:23.238973 | orchestrator | ===============================================================================
2025-05-28 17:25:23.238984 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.80s
2025-05-28 17:25:23.238995 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.58s
2025-05-28 17:25:23.239006 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.05s
2025-05-28 17:25:23.239016 | orchestrator | Create share directory -------------------------------------------------- 1.00s
2025-05-28 17:25:23.239027 | orchestrator |
2025-05-28 17:25:23.239038 | orchestrator |
2025-05-28 17:25:23.239049 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-28 17:25:23.239060 | orchestrator |
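The "Copy ceph keys to the configuration repository" play above fetches nine loop items (seven distinct keyrings) from testbed-node-0 and writes them twice on the manager, into a share directory and into the configuration directory. Note that ceph.client.cinder.keyring occurs three times in the item list, which is why the share-directory task reports "ok" for the repeats after the first "changed". A minimal sketch of the write step; the paths are assumptions, not taken from the log:

    import shutil
    from pathlib import Path

    KEYRINGS = [
        "ceph.client.admin.keyring",
        "ceph.client.cinder.keyring",
        "ceph.client.cinder-backup.keyring",
        "ceph.client.nova.keyring",
        "ceph.client.glance.keyring",
        "ceph.client.gnocchi.keyring",
        "ceph.client.manila.keyring",
    ]

    fetched = Path("/tmp/fetched-ceph-keys")                # assumed fetch location
    targets = [Path("/share"), Path("/opt/configuration")]  # assumed write targets

    for target in targets:
        target.mkdir(parents=True, exist_ok=True)
        for keyring in KEYRINGS:
            # Copying an identical file again is what Ansible reports as "ok".
            shutil.copy(fetched / keyring, target / keyring)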
2025-05-28 17:25:23.239082 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-28 17:25:23.239217 | orchestrator | Wednesday 28 May 2025 17:23:35 +0000 (0:00:00.250) 0:00:00.250 ********* 2025-05-28 17:25:23.239235 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:25:23.239246 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:25:23.239257 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:25:23.239268 | orchestrator | 2025-05-28 17:25:23.239279 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-28 17:25:23.239289 | orchestrator | Wednesday 28 May 2025 17:23:35 +0000 (0:00:00.283) 0:00:00.534 ********* 2025-05-28 17:25:23.239300 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-05-28 17:25:23.239311 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-05-28 17:25:23.239322 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-05-28 17:25:23.239332 | orchestrator | 2025-05-28 17:25:23.239343 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-05-28 17:25:23.239354 | orchestrator | 2025-05-28 17:25:23.239365 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-28 17:25:23.239375 | orchestrator | Wednesday 28 May 2025 17:23:36 +0000 (0:00:00.411) 0:00:00.945 ********* 2025-05-28 17:25:23.239387 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:25:23.239398 | orchestrator | 2025-05-28 17:25:23.239408 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-05-28 17:25:23.239419 | orchestrator | Wednesday 28 May 2025 17:23:36 +0000 (0:00:00.485) 0:00:01.430 ********* 2025-05-28 17:25:23.239445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-28 17:25:23.239477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-28 17:25:23.239506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-28 17:25:23.239526 | orchestrator | 2025-05-28 17:25:23.239538 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-05-28 17:25:23.239549 | orchestrator | Wednesday 28 May 2025 17:23:37 +0000 (0:00:01.070) 0:00:02.500 ********* 2025-05-28 17:25:23.239559 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:25:23.239570 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:25:23.239581 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:25:23.239592 | orchestrator | 2025-05-28 17:25:23.239602 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-28 17:25:23.239613 | orchestrator | Wednesday 28 May 2025 17:23:38 +0000 (0:00:00.418) 0:00:02.918 ********* 2025-05-28 17:25:23.239624 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-28 17:25:23.239635 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-05-28 17:25:23.239651 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-05-28 17:25:23.239662 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-05-28 17:25:23.239673 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-05-28 17:25:23.239684 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-05-28 17:25:23.239695 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-05-28 17:25:23.239705 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-05-28 17:25:23.239716 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-28 17:25:23.239727 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-05-28 17:25:23.239738 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-05-28 17:25:23.239748 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 
'enabled': False})  2025-05-28 17:25:23.239759 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-05-28 17:25:23.239770 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-05-28 17:25:23.239781 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-05-28 17:25:23.239792 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-05-28 17:25:23.239803 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-28 17:25:23.239816 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-05-28 17:25:23.239828 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-05-28 17:25:23.239840 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-05-28 17:25:23.239853 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-05-28 17:25:23.239865 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-05-28 17:25:23.239877 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-05-28 17:25:23.239890 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-05-28 17:25:23.239902 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-05-28 17:25:23.239917 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-05-28 17:25:23.239930 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-05-28 17:25:23.239942 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-05-28 17:25:23.239962 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-05-28 17:25:23.239982 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-05-28 17:25:23.240003 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-05-28 17:25:23.240017 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-05-28 17:25:23.240029 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-05-28 17:25:23.240042 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-05-28 17:25:23.240054 | orchestrator | 2025-05-28 17:25:23.240067 | orchestrator | TASK [horizon : Update policy file name] 
*************************************** 2025-05-28 17:25:23.240080 | orchestrator | Wednesday 28 May 2025 17:23:39 +0000 (0:00:00.706) 0:00:03.625 ********* 2025-05-28 17:25:23.240092 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:25:23.240104 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:25:23.240116 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:25:23.240129 | orchestrator | 2025-05-28 17:25:23.240141 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-28 17:25:23.240154 | orchestrator | Wednesday 28 May 2025 17:23:39 +0000 (0:00:00.292) 0:00:03.917 ********* 2025-05-28 17:25:23.240165 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:25:23.240176 | orchestrator | 2025-05-28 17:25:23.240187 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-28 17:25:23.240233 | orchestrator | Wednesday 28 May 2025 17:23:39 +0000 (0:00:00.118) 0:00:04.036 ********* 2025-05-28 17:25:23.240245 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:25:23.240256 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:25:23.240267 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:25:23.240277 | orchestrator | 2025-05-28 17:25:23.240288 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-28 17:25:23.240299 | orchestrator | Wednesday 28 May 2025 17:23:39 +0000 (0:00:00.454) 0:00:04.490 ********* 2025-05-28 17:25:23.240310 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:25:23.240320 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:25:23.240331 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:25:23.240342 | orchestrator | 2025-05-28 17:25:23.240353 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-28 17:25:23.240363 | orchestrator | Wednesday 28 May 2025 17:23:40 +0000 (0:00:00.298) 0:00:04.788 ********* 2025-05-28 17:25:23.240374 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:25:23.240385 | orchestrator | 2025-05-28 17:25:23.240395 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-28 17:25:23.240406 | orchestrator | Wednesday 28 May 2025 17:23:40 +0000 (0:00:00.118) 0:00:04.907 ********* 2025-05-28 17:25:23.240417 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:25:23.240428 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:25:23.240438 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:25:23.240449 | orchestrator | 2025-05-28 17:25:23.240459 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-28 17:25:23.240471 | orchestrator | Wednesday 28 May 2025 17:23:40 +0000 (0:00:00.283) 0:00:05.191 ********* 2025-05-28 17:25:23.240481 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:25:23.240493 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:25:23.240532 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:25:23.240544 | orchestrator | 2025-05-28 17:25:23.240555 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-28 17:25:23.240565 | orchestrator | Wednesday 28 May 2025 17:23:40 +0000 (0:00:00.288) 0:00:05.480 ********* 2025-05-28 17:25:23.240576 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:25:23.240587 | orchestrator | 2025-05-28 17:25:23.240598 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 
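The long run of skipped and included `policy_item.yml` items above comes from a single per-service loop in which the `enabled` flag is heterogeneously typed: plain booleans (`True`/`False`) next to the strings `'yes'` and `'no'`. The string `'no'` only skips because the flag is normalized to a boolean before the check; each service that survives the filter then triggers one "Update policy file name" / "Check if policies shall be overwritten" / "Update custom policy file name" cycle, which is why those three task headers repeat about ten times through this play. A minimal sketch of the loop pattern, with assumed task and variable names rather than the actual kolla-ansible role source:

```yaml
# Illustrative reconstruction of the include loop logged above; file and
# variable names are assumptions, not the real kolla-ansible role code.
- name: include_tasks
  include_tasks: policy_item.yml
  # 'enabled' mixes booleans and yes/no strings, so it is coerced first;
  # this is why {'name': 'heat', 'enabled': 'no'} is skipped while
  # {'name': 'designate', 'enabled': True} is included.
  when: item.enabled | bool
  loop:
    - { name: heat, enabled: "no" }
    - { name: designate, enabled: true }
    - { name: nova, enabled: true }
```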
2025-05-28 17:25:23.240609 | orchestrator | Wednesday 28 May 2025 17:23:41 +0000 (0:00:00.300) 0:00:05.780 ********* 2025-05-28 17:25:23.240619 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:25:23.240630 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:25:23.240641 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:25:23.240652 | orchestrator | 2025-05-28 17:25:23.240662 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-28 17:25:23.240673 | orchestrator | Wednesday 28 May 2025 17:23:41 +0000 (0:00:00.286) 0:00:06.067 ********* 2025-05-28 17:25:23.240684 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:25:23.240695 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:25:23.240706 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:25:23.240717 | orchestrator | 2025-05-28 17:25:23.240727 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-28 17:25:23.240738 | orchestrator | Wednesday 28 May 2025 17:23:41 +0000 (0:00:00.314) 0:00:06.382 ********* 2025-05-28 17:25:23.240749 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:25:23.240760 | orchestrator | 2025-05-28 17:25:23.240771 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-28 17:25:23.240781 | orchestrator | Wednesday 28 May 2025 17:23:41 +0000 (0:00:00.118) 0:00:06.500 ********* 2025-05-28 17:25:23.240792 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:25:23.240803 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:25:23.240814 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:25:23.240824 | orchestrator | 2025-05-28 17:25:23.240835 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-28 17:25:23.240846 | orchestrator | Wednesday 28 May 2025 17:23:42 +0000 (0:00:00.275) 0:00:06.775 ********* 2025-05-28 17:25:23.240857 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:25:23.240868 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:25:23.240878 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:25:23.240889 | orchestrator | 2025-05-28 17:25:23.240900 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-28 17:25:23.240915 | orchestrator | Wednesday 28 May 2025 17:23:42 +0000 (0:00:00.487) 0:00:07.263 ********* 2025-05-28 17:25:23.240927 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:25:23.240938 | orchestrator | 2025-05-28 17:25:23.240949 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-28 17:25:23.240959 | orchestrator | Wednesday 28 May 2025 17:23:42 +0000 (0:00:00.140) 0:00:07.404 ********* 2025-05-28 17:25:23.240970 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:25:23.240981 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:25:23.240992 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:25:23.241002 | orchestrator | 2025-05-28 17:25:23.241013 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-28 17:25:23.241024 | orchestrator | Wednesday 28 May 2025 17:23:43 +0000 (0:00:00.298) 0:00:07.702 ********* 2025-05-28 17:25:23.241034 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:25:23.241045 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:25:23.241056 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:25:23.241067 | orchestrator | 2025-05-28 
17:25:23.241078 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-28 17:25:23.241089 | orchestrator | Wednesday 28 May 2025 17:23:43 +0000 (0:00:00.287) 0:00:07.990 ********* 2025-05-28 17:25:23.241099 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:25:23.241110 | orchestrator | 2025-05-28 17:25:23.241121 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-28 17:25:23.241139 | orchestrator | Wednesday 28 May 2025 17:23:43 +0000 (0:00:00.118) 0:00:08.108 ********* 2025-05-28 17:25:23.241150 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:25:23.241160 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:25:23.241171 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:25:23.241182 | orchestrator | 2025-05-28 17:25:23.241247 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-28 17:25:23.241261 | orchestrator | Wednesday 28 May 2025 17:23:43 +0000 (0:00:00.430) 0:00:08.539 ********* 2025-05-28 17:25:23.241272 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:25:23.241283 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:25:23.241294 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:25:23.241304 | orchestrator | 2025-05-28 17:25:23.241322 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-28 17:25:23.241333 | orchestrator | Wednesday 28 May 2025 17:23:44 +0000 (0:00:00.306) 0:00:08.845 ********* 2025-05-28 17:25:23.241344 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:25:23.241355 | orchestrator | 2025-05-28 17:25:23.241366 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-28 17:25:23.241377 | orchestrator | Wednesday 28 May 2025 17:23:44 +0000 (0:00:00.133) 0:00:08.979 ********* 2025-05-28 17:25:23.241388 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:25:23.241399 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:25:23.241410 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:25:23.241421 | orchestrator | 2025-05-28 17:25:23.241432 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-28 17:25:23.241443 | orchestrator | Wednesday 28 May 2025 17:23:44 +0000 (0:00:00.291) 0:00:09.271 ********* 2025-05-28 17:25:23.241454 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:25:23.241465 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:25:23.241476 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:25:23.241487 | orchestrator | 2025-05-28 17:25:23.241498 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-28 17:25:23.241509 | orchestrator | Wednesday 28 May 2025 17:23:44 +0000 (0:00:00.273) 0:00:09.544 ********* 2025-05-28 17:25:23.241520 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:25:23.241531 | orchestrator | 2025-05-28 17:25:23.241542 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-28 17:25:23.241552 | orchestrator | Wednesday 28 May 2025 17:23:45 +0000 (0:00:00.126) 0:00:09.671 ********* 2025-05-28 17:25:23.241564 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:25:23.241574 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:25:23.241585 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:25:23.241596 | orchestrator | 2025-05-28 17:25:23.241607 | 
orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-28 17:25:23.241618 | orchestrator | Wednesday 28 May 2025 17:23:45 +0000 (0:00:00.513) 0:00:10.184 ********* 2025-05-28 17:25:23.241629 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:25:23.241640 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:25:23.241651 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:25:23.241661 | orchestrator | 2025-05-28 17:25:23.241672 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-28 17:25:23.241683 | orchestrator | Wednesday 28 May 2025 17:23:45 +0000 (0:00:00.285) 0:00:10.469 ********* 2025-05-28 17:25:23.241694 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:25:23.241705 | orchestrator | 2025-05-28 17:25:23.241716 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-28 17:25:23.241727 | orchestrator | Wednesday 28 May 2025 17:23:46 +0000 (0:00:00.130) 0:00:10.600 ********* 2025-05-28 17:25:23.241737 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:25:23.241748 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:25:23.241759 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:25:23.241770 | orchestrator | 2025-05-28 17:25:23.241781 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-28 17:25:23.241792 | orchestrator | Wednesday 28 May 2025 17:23:46 +0000 (0:00:00.266) 0:00:10.867 ********* 2025-05-28 17:25:23.241810 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:25:23.241821 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:25:23.241832 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:25:23.241843 | orchestrator | 2025-05-28 17:25:23.241854 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-28 17:25:23.241864 | orchestrator | Wednesday 28 May 2025 17:23:46 +0000 (0:00:00.506) 0:00:11.373 ********* 2025-05-28 17:25:23.241875 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:25:23.241886 | orchestrator | 2025-05-28 17:25:23.241897 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-28 17:25:23.241908 | orchestrator | Wednesday 28 May 2025 17:23:46 +0000 (0:00:00.121) 0:00:11.494 ********* 2025-05-28 17:25:23.241919 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:25:23.241929 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:25:23.241940 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:25:23.241951 | orchestrator | 2025-05-28 17:25:23.241968 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-05-28 17:25:23.241979 | orchestrator | Wednesday 28 May 2025 17:23:47 +0000 (0:00:00.299) 0:00:11.793 ********* 2025-05-28 17:25:23.241990 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:25:23.242001 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:25:23.242011 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:25:23.242058 | orchestrator | 2025-05-28 17:25:23.242069 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-05-28 17:25:23.242080 | orchestrator | Wednesday 28 May 2025 17:23:48 +0000 (0:00:01.540) 0:00:13.334 ********* 2025-05-28 17:25:23.242091 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-28 17:25:23.242102 | orchestrator 
| changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-28 17:25:23.242112 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-28 17:25:23.242123 | orchestrator | 2025-05-28 17:25:23.242134 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-05-28 17:25:23.242145 | orchestrator | Wednesday 28 May 2025 17:23:50 +0000 (0:00:01.980) 0:00:15.315 ********* 2025-05-28 17:25:23.242156 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-28 17:25:23.242166 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-28 17:25:23.242177 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-28 17:25:23.242188 | orchestrator | 2025-05-28 17:25:23.242224 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-05-28 17:25:23.242234 | orchestrator | Wednesday 28 May 2025 17:23:53 +0000 (0:00:02.398) 0:00:17.713 ********* 2025-05-28 17:25:23.242252 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-28 17:25:23.242264 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-28 17:25:23.242274 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-28 17:25:23.242285 | orchestrator | 2025-05-28 17:25:23.242296 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-05-28 17:25:23.242307 | orchestrator | Wednesday 28 May 2025 17:23:54 +0000 (0:00:01.719) 0:00:19.432 ********* 2025-05-28 17:25:23.242318 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:25:23.242329 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:25:23.242340 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:25:23.242350 | orchestrator | 2025-05-28 17:25:23.242361 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-05-28 17:25:23.242372 | orchestrator | Wednesday 28 May 2025 17:23:55 +0000 (0:00:00.315) 0:00:19.748 ********* 2025-05-28 17:25:23.242397 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:25:23.242408 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:25:23.242419 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:25:23.242430 | orchestrator | 2025-05-28 17:25:23.242441 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-28 17:25:23.242452 | orchestrator | Wednesday 28 May 2025 17:23:55 +0000 (0:00:00.288) 0:00:20.037 ********* 2025-05-28 17:25:23.242462 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:25:23.242474 | orchestrator | 2025-05-28 17:25:23.242484 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-05-28 17:25:23.242495 | orchestrator | Wednesday 28 May 2025 17:23:56 +0000 (0:00:00.775) 0:00:20.812 ********* 2025-05-28 17:25:23.242514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-28 17:25:23.242537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-28 17:25:23.242563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-28 17:25:23.242576 | orchestrator | 2025-05-28 17:25:23.242587 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-05-28 17:25:23.242597 | orchestrator | Wednesday 28 May 2025 17:23:57 +0000 (0:00:01.456) 0:00:22.269 ********* 2025-05-28 17:25:23.242619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 
'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-28 17:25:23.242640 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:25:23.242658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-28 17:25:23.242676 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:25:23.242688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-28 17:25:23.242706 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:25:23.242717 | orchestrator | 2025-05-28 17:25:23.242728 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-05-28 17:25:23.242738 | orchestrator | Wednesday 28 May 2025 17:23:58 +0000 (0:00:00.599) 0:00:22.868 ********* 2025-05-28 17:25:23.242765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-28 17:25:23.242785 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:25:23.242797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-28 17:25:23.242809 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:25:23.242852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-28 17:25:23.242872 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:25:23.242883 | orchestrator | 2025-05-28 17:25:23.242894 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-05-28 17:25:23.242905 | orchestrator | Wednesday 28 May 2025 17:23:59 +0000 (0:00:01.025) 0:00:23.893 ********* 2025-05-28 17:25:23.242922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-28 17:25:23.242944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 
'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-28 17:25:23.242970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-28 17:25:23.242982 | orchestrator | 2025-05-28 17:25:23.242993 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-28 17:25:23.243004 | orchestrator | Wednesday 28 May 2025 17:24:00 +0000 (0:00:01.163) 0:00:25.057 ********* 2025-05-28 17:25:23.243015 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:25:23.243026 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:25:23.243036 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:25:23.243054 | orchestrator | 2025-05-28 17:25:23.243065 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-28 17:25:23.243076 | orchestrator | Wednesday 28 May 2025 17:24:00 +0000 (0:00:00.307) 0:00:25.365 ********* 2025-05-28 17:25:23.243087 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:25:23.243098 | orchestrator | 2025-05-28 
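Each loop item above embeds the same service definition; the interesting parts, reflowed into YAML, are the container healthcheck and the HAProxy map. The values below are verbatim from the testbed-node-0 item (the other nodes differ only in the healthcheck address, 192.168.16.11 and 192.168.16.12); only the layout and comments are new. The pattern: the frontend terminates on 443 and passes plain HTTP to the backends on 80, port 80 itself only redirects, and every frontend diverts ACME HTTP-01 challenges to the acme_client backend.

```yaml
# Healthcheck and HAProxy sections of the testbed-node-0 item, reflowed
# for readability; all values are verbatim from the log, durations in
# seconds. healthcheck_curl is the helper shipped inside kolla images.
healthcheck:
  interval: "30"
  retries: "3"
  start_period: "5"
  timeout: "30"
  test:
    - CMD-SHELL
    - healthcheck_curl http://192.168.16.10:80
haproxy:
  horizon:               # frontend on 443, plain-HTTP backends on 80
    enabled: true
    mode: http
    external: false
    port: "443"
    listen_port: "80"
    frontend_http_extra:
      - use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }
    backend_http_extra:
      - balance roundrobin
    tls_backend: "no"
  horizon_redirect:      # port 80 only redirects to 443
    enabled: true
    mode: redirect
    external: false
    port: "80"
    listen_port: "80"
    frontend_redirect_extra:
      - use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }
  # horizon_external and horizon_external_redirect repeat this pair with
  # external: true and external_fqdn: api.testbed.osism.xyz.
```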
17:25:23.243108 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-05-28 17:25:23.243119 | orchestrator | Wednesday 28 May 2025 17:24:01 +0000 (0:00:00.650) 0:00:26.016 ********* 2025-05-28 17:25:23.243130 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:25:23.243141 | orchestrator | 2025-05-28 17:25:23.243157 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-05-28 17:25:23.243168 | orchestrator | Wednesday 28 May 2025 17:24:03 +0000 (0:00:02.129) 0:00:28.145 ********* 2025-05-28 17:25:23.243179 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:25:23.243206 | orchestrator | 2025-05-28 17:25:23.243218 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-05-28 17:25:23.243229 | orchestrator | Wednesday 28 May 2025 17:24:05 +0000 (0:00:02.014) 0:00:30.159 ********* 2025-05-28 17:25:23.243240 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:25:23.243250 | orchestrator | 2025-05-28 17:25:23.243261 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-28 17:25:23.243272 | orchestrator | Wednesday 28 May 2025 17:24:20 +0000 (0:00:14.739) 0:00:44.899 ********* 2025-05-28 17:25:23.243283 | orchestrator | 2025-05-28 17:25:23.243294 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-28 17:25:23.243304 | orchestrator | Wednesday 28 May 2025 17:24:20 +0000 (0:00:00.063) 0:00:44.962 ********* 2025-05-28 17:25:23.243315 | orchestrator | 2025-05-28 17:25:23.243326 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-28 17:25:23.243337 | orchestrator | Wednesday 28 May 2025 17:24:20 +0000 (0:00:00.064) 0:00:45.027 ********* 2025-05-28 17:25:23.243347 | orchestrator | 2025-05-28 17:25:23.243358 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-05-28 17:25:23.243369 | orchestrator | Wednesday 28 May 2025 17:24:20 +0000 (0:00:00.068) 0:00:45.095 ********* 2025-05-28 17:25:23.243380 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:25:23.243390 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:25:23.243401 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:25:23.243412 | orchestrator | 2025-05-28 17:25:23.243423 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:25:23.243434 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-05-28 17:25:23.243445 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-05-28 17:25:23.243456 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-05-28 17:25:23.243466 | orchestrator | 2025-05-28 17:25:23.243477 | orchestrator | 2025-05-28 17:25:23.243488 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:25:23.243499 | orchestrator | Wednesday 28 May 2025 17:25:19 +0000 (0:00:59.360) 0:01:44.456 ********* 2025-05-28 17:25:23.243510 | orchestrator | =============================================================================== 2025-05-28 17:25:23.243520 | orchestrator | horizon : Restart horizon container ------------------------------------ 59.36s 2025-05-28 17:25:23.243531 | 
orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.74s 2025-05-28 17:25:23.243542 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.40s 2025-05-28 17:25:23.243553 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.13s 2025-05-28 17:25:23.243575 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.01s 2025-05-28 17:25:23.243586 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.98s 2025-05-28 17:25:23.243597 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.72s 2025-05-28 17:25:23.243608 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.54s 2025-05-28 17:25:23.243618 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.46s 2025-05-28 17:25:23.243629 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.16s 2025-05-28 17:25:23.243645 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.07s 2025-05-28 17:25:23.243656 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.03s 2025-05-28 17:25:23.243667 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.78s 2025-05-28 17:25:23.243677 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.71s 2025-05-28 17:25:23.243688 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.65s 2025-05-28 17:25:23.243699 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.60s 2025-05-28 17:25:23.243710 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.51s 2025-05-28 17:25:23.243720 | orchestrator | horizon : Update policy file name --------------------------------------- 0.51s 2025-05-28 17:25:23.243731 | orchestrator | horizon : Update policy file name --------------------------------------- 0.49s 2025-05-28 17:25:23.243742 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.49s 2025-05-28 17:25:23.243752 | orchestrator | 2025-05-28 17:25:23 | INFO  | Task 5ad4789a-1168-4780-a06c-a2e94241c756 is in state STARTED 2025-05-28 17:25:23.243764 | orchestrator | 2025-05-28 17:25:23 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state STARTED 2025-05-28 17:25:23.243775 | orchestrator | 2025-05-28 17:25:23 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:25:26.286466 | orchestrator | 2025-05-28 17:25:26 | INFO  | Task 5ad4789a-1168-4780-a06c-a2e94241c756 is in state STARTED 2025-05-28 17:25:26.287117 | orchestrator | 2025-05-28 17:25:26 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state STARTED 2025-05-28 17:25:26.287148 | orchestrator | 2025-05-28 17:25:26 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:25:29.325389 | orchestrator | 2025-05-28 17:25:29 | INFO  | Task 5ad4789a-1168-4780-a06c-a2e94241c756 is in state STARTED 2025-05-28 17:25:29.325633 | orchestrator | 2025-05-28 17:25:29 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state STARTED 2025-05-28 17:25:29.325658 | orchestrator | 2025-05-28 17:25:29 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:25:32.377534 | orchestrator | 2025-05-28 17:25:32 | 
INFO  | Task 5ad4789a-1168-4780-a06c-a2e94241c756 is in state STARTED 2025-05-28 17:25:32.382348 | orchestrator | 2025-05-28 17:25:32 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state STARTED 2025-05-28 17:25:32.382444 | orchestrator | 2025-05-28 17:25:32 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:25:35.431447 | orchestrator | 2025-05-28 17:25:35 | INFO  | Task 5ad4789a-1168-4780-a06c-a2e94241c756 is in state STARTED 2025-05-28 17:25:35.432468 | orchestrator | 2025-05-28 17:25:35 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state STARTED 2025-05-28 17:25:35.432499 | orchestrator | 2025-05-28 17:25:35 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:25:38.481578 | orchestrator | 2025-05-28 17:25:38 | INFO  | Task 5ad4789a-1168-4780-a06c-a2e94241c756 is in state STARTED 2025-05-28 17:25:38.483360 | orchestrator | 2025-05-28 17:25:38 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state STARTED 2025-05-28 17:25:38.483443 | orchestrator | 2025-05-28 17:25:38 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:25:41.534721 | orchestrator | 2025-05-28 17:25:41 | INFO  | Task 5ad4789a-1168-4780-a06c-a2e94241c756 is in state STARTED 2025-05-28 17:25:41.536684 | orchestrator | 2025-05-28 17:25:41 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state STARTED 2025-05-28 17:25:41.536741 | orchestrator | 2025-05-28 17:25:41 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:25:44.587365 | orchestrator | 2025-05-28 17:25:44 | INFO  | Task 5ad4789a-1168-4780-a06c-a2e94241c756 is in state STARTED 2025-05-28 17:25:44.588965 | orchestrator | 2025-05-28 17:25:44 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state STARTED 2025-05-28 17:25:44.588993 | orchestrator | 2025-05-28 17:25:44 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:25:47.640707 | orchestrator | 2025-05-28 17:25:47 | INFO  | Task 5ad4789a-1168-4780-a06c-a2e94241c756 is in state STARTED 2025-05-28 17:25:47.642328 | orchestrator | 2025-05-28 17:25:47 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state STARTED 2025-05-28 17:25:47.642376 | orchestrator | 2025-05-28 17:25:47 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:25:50.689584 | orchestrator | 2025-05-28 17:25:50 | INFO  | Task 5ad4789a-1168-4780-a06c-a2e94241c756 is in state STARTED 2025-05-28 17:25:50.690425 | orchestrator | 2025-05-28 17:25:50 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state STARTED 2025-05-28 17:25:50.690576 | orchestrator | 2025-05-28 17:25:50 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:25:53.751748 | orchestrator | 2025-05-28 17:25:53 | INFO  | Task 5ad4789a-1168-4780-a06c-a2e94241c756 is in state STARTED 2025-05-28 17:25:53.752792 | orchestrator | 2025-05-28 17:25:53 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state STARTED 2025-05-28 17:25:53.752827 | orchestrator | 2025-05-28 17:25:53 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:25:56.804729 | orchestrator | 2025-05-28 17:25:56 | INFO  | Task 5ad4789a-1168-4780-a06c-a2e94241c756 is in state STARTED 2025-05-28 17:25:56.806611 | orchestrator | 2025-05-28 17:25:56 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state STARTED 2025-05-28 17:25:56.806639 | orchestrator | 2025-05-28 17:25:56 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:25:59.852067 | orchestrator | 2025-05-28 17:25:59 | INFO  | Task 5ad4789a-1168-4780-a06c-a2e94241c756 is in state STARTED 2025-05-28 17:25:59.853626 | 
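The repeating "Task … is in state STARTED … Wait 1 second(s) until the next check" lines show the OSISM client polling its asynchronous tasks once per second until each one reports SUCCESS. The same poll-until-done pattern, expressed as an Ansible retry loop purely for illustration (`osism_task_status` is a hypothetical module, not part of the real client):

```yaml
# Illustrative only: the poll loop logged above, rewritten as an Ansible
# 'until' task. The module name and result field are hypothetical.
- name: Wait for the deployment task to finish
  osism_task_status:
    id: 5ad4789a-1168-4780-a06c-a2e94241c756  # task id taken from the log
  register: result
  until: result.state == "SUCCESS"
  retries: 600  # upper bound; the task above reached SUCCESS after ~40s
  delay: 1      # matches "Wait 1 second(s) until the next check"
```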
orchestrator | 2025-05-28 17:25:59 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state STARTED 2025-05-28 17:25:59.854471 | orchestrator | 2025-05-28 17:25:59 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:26:02.906393 | orchestrator | 2025-05-28 17:26:02 | INFO  | Task 5ad4789a-1168-4780-a06c-a2e94241c756 is in state STARTED 2025-05-28 17:26:02.907051 | orchestrator | 2025-05-28 17:26:02 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state STARTED 2025-05-28 17:26:02.907084 | orchestrator | 2025-05-28 17:26:02 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:26:05.962543 | orchestrator | 2025-05-28 17:26:05 | INFO  | Task a2cee338-b143-4d42-828c-d647b2353c8d is in state STARTED 2025-05-28 17:26:05.962688 | orchestrator | 2025-05-28 17:26:05 | INFO  | Task 7432d1e9-9d72-44ec-b255-622be1d0ea02 is in state STARTED 2025-05-28 17:26:05.964950 | orchestrator | 2025-05-28 17:26:05 | INFO  | Task 5ad4789a-1168-4780-a06c-a2e94241c756 is in state SUCCESS 2025-05-28 17:26:05.966489 | orchestrator | 2025-05-28 17:26:05 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state STARTED 2025-05-28 17:26:05.967930 | orchestrator | 2025-05-28 17:26:05 | INFO  | Task 029ce531-bca3-434d-b6ce-8d2abb3a9626 is in state STARTED 2025-05-28 17:26:05.967961 | orchestrator | 2025-05-28 17:26:05 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:26:09.036684 | orchestrator | 2025-05-28 17:26:09 | INFO  | Task a2cee338-b143-4d42-828c-d647b2353c8d is in state STARTED 2025-05-28 17:26:09.037138 | orchestrator | 2025-05-28 17:26:09 | INFO  | Task 7432d1e9-9d72-44ec-b255-622be1d0ea02 is in state STARTED 2025-05-28 17:26:09.039321 | orchestrator | 2025-05-28 17:26:09 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state STARTED 2025-05-28 17:26:09.040231 | orchestrator | 2025-05-28 17:26:09 | INFO  | Task 029ce531-bca3-434d-b6ce-8d2abb3a9626 is in state STARTED 2025-05-28 17:26:09.040378 | orchestrator | 2025-05-28 17:26:09 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:26:12.072792 | orchestrator | 2025-05-28 17:26:12 | INFO  | Task a2cee338-b143-4d42-828c-d647b2353c8d is in state SUCCESS 2025-05-28 17:26:12.075023 | orchestrator | 2025-05-28 17:26:12 | INFO  | Task 7432d1e9-9d72-44ec-b255-622be1d0ea02 is in state STARTED 2025-05-28 17:26:12.076709 | orchestrator | 2025-05-28 17:26:12 | INFO  | Task 628ae488-32c1-4536-8d3a-3f4b270537be is in state STARTED 2025-05-28 17:26:12.079164 | orchestrator | 2025-05-28 17:26:12 | INFO  | Task 4ad7eada-631a-4384-9c67-a6cd37dd95bb is in state STARTED 2025-05-28 17:26:12.079910 | orchestrator | 2025-05-28 17:26:12 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state STARTED 2025-05-28 17:26:12.082382 | orchestrator | 2025-05-28 17:26:12 | INFO  | Task 029ce531-bca3-434d-b6ce-8d2abb3a9626 is in state STARTED 2025-05-28 17:26:12.082405 | orchestrator | 2025-05-28 17:26:12 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:26:15.120603 | orchestrator | 2025-05-28 17:26:15 | INFO  | Task 7432d1e9-9d72-44ec-b255-622be1d0ea02 is in state STARTED 2025-05-28 17:26:15.121497 | orchestrator | 2025-05-28 17:26:15 | INFO  | Task 628ae488-32c1-4536-8d3a-3f4b270537be is in state STARTED 2025-05-28 17:26:15.123103 | orchestrator | 2025-05-28 17:26:15 | INFO  | Task 4ad7eada-631a-4384-9c67-a6cd37dd95bb is in state STARTED 2025-05-28 17:26:15.124435 | orchestrator | 2025-05-28 17:26:15 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state STARTED 2025-05-28 17:26:15.125935 | 
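The polling above is the OSISM client waiting on asynchronous tasks: it enqueues work on the manager, then re-checks each task ID until it leaves STARTED for a terminal state such as SUCCESS. The same wait-until-done pattern can be sketched as an Ansible task; `check_task_state` below is a purely hypothetical helper standing in for whatever query the client performs, and the 3-second cadence is read off the gaps between the timestamps above.

# Sketch only: poll an asynchronous task until it reaches a terminal state.
# `check_task_state` is a hypothetical helper, not part of the OSISM CLI.
- name: Wait until the task leaves state STARTED
  ansible.builtin.command: "check_task_state {{ task_id }}"
  register: task_state
  until: task_state.stdout in ['SUCCESS', 'FAILURE']
  retries: 120          # give up after roughly six minutes
  delay: 3              # the log shows ~3 s between successive checks
  changed_when: false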
orchestrator | 2025-05-28 17:26:15 | INFO  | Task 029ce531-bca3-434d-b6ce-8d2abb3a9626 is in state STARTED 2025-05-28 17:26:15.125984 | orchestrator | 2025-05-28 17:26:15 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:26:18.175047 | orchestrator | 2025-05-28 17:26:18 | INFO  | Task ffc1a8d8-c459-47a4-8999-43321493f5ee is in state STARTED 2025-05-28 17:26:18.175210 | orchestrator | 2025-05-28 17:26:18 | INFO  | Task 7432d1e9-9d72-44ec-b255-622be1d0ea02 is in state STARTED 2025-05-28 17:26:18.175227 | orchestrator | 2025-05-28 17:26:18 | INFO  | Task 628ae488-32c1-4536-8d3a-3f4b270537be is in state STARTED 2025-05-28 17:26:18.175240 | orchestrator | 2025-05-28 17:26:18 | INFO  | Task 4ad7eada-631a-4384-9c67-a6cd37dd95bb is in state STARTED 2025-05-28 17:26:18.177734 | orchestrator | 2025-05-28 17:26:18 | INFO  | Task 0857faf7-466d-470d-8183-6e64a7d62bfe is in state SUCCESS 2025-05-28 17:26:18.179907 | orchestrator | 2025-05-28 17:26:18.180010 | orchestrator | 2025-05-28 17:26:18.180035 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-05-28 17:26:18.180056 | orchestrator | 2025-05-28 17:26:18.180074 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-05-28 17:26:18.180095 | orchestrator | Wednesday 28 May 2025 17:25:14 +0000 (0:00:00.238) 0:00:00.238 ********* 2025-05-28 17:26:18.180154 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-05-28 17:26:18.180208 | orchestrator | 2025-05-28 17:26:18.180221 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-05-28 17:26:18.180232 | orchestrator | Wednesday 28 May 2025 17:25:14 +0000 (0:00:00.217) 0:00:00.455 ********* 2025-05-28 17:26:18.180243 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-05-28 17:26:18.180254 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-05-28 17:26:18.180266 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-05-28 17:26:18.180277 | orchestrator | 2025-05-28 17:26:18.180288 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-05-28 17:26:18.180298 | orchestrator | Wednesday 28 May 2025 17:25:15 +0000 (0:00:01.205) 0:00:01.661 ********* 2025-05-28 17:26:18.180309 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-05-28 17:26:18.180320 | orchestrator | 2025-05-28 17:26:18.180331 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-05-28 17:26:18.180341 | orchestrator | Wednesday 28 May 2025 17:25:17 +0000 (0:00:01.104) 0:00:02.765 ********* 2025-05-28 17:26:18.180432 | orchestrator | changed: [testbed-manager] 2025-05-28 17:26:18.180445 | orchestrator | 2025-05-28 17:26:18.180456 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-05-28 17:26:18.180469 | orchestrator | Wednesday 28 May 2025 17:25:18 +0000 (0:00:00.973) 0:00:03.739 ********* 2025-05-28 17:26:18.180481 | orchestrator | changed: [testbed-manager] 2025-05-28 17:26:18.180493 | orchestrator | 2025-05-28 17:26:18.180506 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-05-28 17:26:18.180519 | orchestrator | Wednesday 28 May 
2025 17:25:18 +0000 (0:00:00.874) 0:00:04.613 ********* 2025-05-28 17:26:18.180531 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-05-28 17:26:18.180543 | orchestrator | ok: [testbed-manager] 2025-05-28 17:26:18.180555 | orchestrator | 2025-05-28 17:26:18.180567 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-05-28 17:26:18.180579 | orchestrator | Wednesday 28 May 2025 17:25:55 +0000 (0:00:36.343) 0:00:40.957 ********* 2025-05-28 17:26:18.180592 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-05-28 17:26:18.180604 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-05-28 17:26:18.180616 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-05-28 17:26:18.180628 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-05-28 17:26:18.180640 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-05-28 17:26:18.180652 | orchestrator | 2025-05-28 17:26:18.180665 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-05-28 17:26:18.180677 | orchestrator | Wednesday 28 May 2025 17:25:59 +0000 (0:00:03.953) 0:00:44.911 ********* 2025-05-28 17:26:18.180688 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-05-28 17:26:18.180699 | orchestrator | 2025-05-28 17:26:18.180709 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-05-28 17:26:18.180720 | orchestrator | Wednesday 28 May 2025 17:25:59 +0000 (0:00:00.444) 0:00:45.355 ********* 2025-05-28 17:26:18.180731 | orchestrator | skipping: [testbed-manager] 2025-05-28 17:26:18.180741 | orchestrator | 2025-05-28 17:26:18.180752 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-05-28 17:26:18.180763 | orchestrator | Wednesday 28 May 2025 17:25:59 +0000 (0:00:00.124) 0:00:45.480 ********* 2025-05-28 17:26:18.180773 | orchestrator | skipping: [testbed-manager] 2025-05-28 17:26:18.180784 | orchestrator | 2025-05-28 17:26:18.180794 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-05-28 17:26:18.180805 | orchestrator | Wednesday 28 May 2025 17:26:00 +0000 (0:00:00.307) 0:00:45.787 ********* 2025-05-28 17:26:18.180827 | orchestrator | changed: [testbed-manager] 2025-05-28 17:26:18.180838 | orchestrator | 2025-05-28 17:26:18.180848 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-05-28 17:26:18.180859 | orchestrator | Wednesday 28 May 2025 17:26:01 +0000 (0:00:01.619) 0:00:47.407 ********* 2025-05-28 17:26:18.180870 | orchestrator | changed: [testbed-manager] 2025-05-28 17:26:18.180880 | orchestrator | 2025-05-28 17:26:18.180891 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-05-28 17:26:18.180902 | orchestrator | Wednesday 28 May 2025 17:26:02 +0000 (0:00:00.693) 0:00:48.100 ********* 2025-05-28 17:26:18.180912 | orchestrator | changed: [testbed-manager] 2025-05-28 17:26:18.180923 | orchestrator | 2025-05-28 17:26:18.180952 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-05-28 17:26:18.180964 | orchestrator | Wednesday 28 May 2025 17:26:03 +0000 (0:00:00.579) 0:00:48.680 ********* 2025-05-28 17:26:18.180974 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-05-28 17:26:18.180985 | orchestrator 
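For the cephclient service the role first renders /opt/cephclient/docker-compose.yml and then "Manage cephclient service" brings the stack up, retrying while the image is still being pulled (hence the single "FAILED - RETRYING ... (10 retries left)" above, and the 36-second task duration). A minimal sketch of what such a compose file could look like, reusing the directories the role just created; the real file is templated by osism.services.cephclient and the image name here is an assumption:

# Illustrative sketch only; the image tag is assumed, the mounted
# directories are the ones created by the role above.
services:
  cephclient:
    image: registry.osism.tech/osism/cephclient:latest  # assumed tag
    restart: unless-stopped
    volumes:
      - /opt/cephclient/configuration:/etc/ceph:ro  # ceph.conf and keyring
      - /opt/cephclient/data:/data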
| ok: [testbed-manager] => (item=rados) 2025-05-28 17:26:18.180996 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-05-28 17:26:18.181007 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-05-28 17:26:18.181018 | orchestrator | 2025-05-28 17:26:18.181029 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:26:18.181040 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:26:18.181052 | orchestrator | 2025-05-28 17:26:18.181063 | orchestrator | 2025-05-28 17:26:18.181141 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:26:18.181155 | orchestrator | Wednesday 28 May 2025 17:26:04 +0000 (0:00:01.446) 0:00:50.126 ********* 2025-05-28 17:26:18.181191 | orchestrator | =============================================================================== 2025-05-28 17:26:18.181203 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 36.34s 2025-05-28 17:26:18.181213 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.95s 2025-05-28 17:26:18.181224 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.62s 2025-05-28 17:26:18.181235 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.45s 2025-05-28 17:26:18.181245 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.21s 2025-05-28 17:26:18.181256 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.10s 2025-05-28 17:26:18.181266 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.97s 2025-05-28 17:26:18.181277 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.87s 2025-05-28 17:26:18.181288 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.69s 2025-05-28 17:26:18.181299 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.58s 2025-05-28 17:26:18.181309 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.44s 2025-05-28 17:26:18.181320 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.31s 2025-05-28 17:26:18.181331 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.22s 2025-05-28 17:26:18.181341 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s 2025-05-28 17:26:18.181352 | orchestrator | 2025-05-28 17:26:18.181363 | orchestrator | 2025-05-28 17:26:18.181373 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-28 17:26:18.181384 | orchestrator | 2025-05-28 17:26:18.181395 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-28 17:26:18.181405 | orchestrator | Wednesday 28 May 2025 17:26:08 +0000 (0:00:00.175) 0:00:00.175 ********* 2025-05-28 17:26:18.181416 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:26:18.181427 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:26:18.181446 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:26:18.181457 | orchestrator | 2025-05-28 17:26:18.181467 | orchestrator | TASK [Group hosts based on enabled services] 
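The tail of the play shows the classic notify/handler chain: a changed docker-compose.yml notifies a restart, after which follow-up handlers verify the containers are up and healthy before the bash completion scripts are refreshed. Reduced to its skeleton (task and handler names mirror the log; the module choices are assumptions, not the role's actual source):

# Skeleton of the notify/handler pattern seen in the play above.
- hosts: testbed-manager
  tasks:
    - name: Copy docker-compose.yml file
      ansible.builtin.template:
        src: docker-compose.yml.j2
        dest: /opt/cephclient/docker-compose.yml
      notify: Restart cephclient service
  handlers:
    - name: Restart cephclient service
      ansible.builtin.command:
        cmd: docker compose up -d
        chdir: /opt/cephclient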
*********************************** 2025-05-28 17:26:18.181478 | orchestrator | Wednesday 28 May 2025 17:26:09 +0000 (0:00:00.286) 0:00:00.461 ********* 2025-05-28 17:26:18.181489 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-05-28 17:26:18.181500 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-05-28 17:26:18.181511 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-05-28 17:26:18.181521 | orchestrator | 2025-05-28 17:26:18.181532 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-05-28 17:26:18.181542 | orchestrator | 2025-05-28 17:26:18.181553 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-05-28 17:26:18.181564 | orchestrator | Wednesday 28 May 2025 17:26:09 +0000 (0:00:00.634) 0:00:01.096 ********* 2025-05-28 17:26:18.181574 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:26:18.181585 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:26:18.181596 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:26:18.181606 | orchestrator | 2025-05-28 17:26:18.181617 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:26:18.181629 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:26:18.181640 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:26:18.181651 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:26:18.181662 | orchestrator | 2025-05-28 17:26:18.181672 | orchestrator | 2025-05-28 17:26:18.181683 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:26:18.181693 | orchestrator | Wednesday 28 May 2025 17:26:10 +0000 (0:00:00.722) 0:00:01.819 ********* 2025-05-28 17:26:18.181704 | orchestrator | =============================================================================== 2025-05-28 17:26:18.181715 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.72s 2025-05-28 17:26:18.181725 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s 2025-05-28 17:26:18.181736 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s 2025-05-28 17:26:18.181746 | orchestrator | 2025-05-28 17:26:18.181760 | orchestrator | 2025-05-28 17:26:18.181779 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-28 17:26:18.181798 | orchestrator | 2025-05-28 17:26:18.181825 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-28 17:26:18.181853 | orchestrator | Wednesday 28 May 2025 17:23:35 +0000 (0:00:00.277) 0:00:00.277 ********* 2025-05-28 17:26:18.181875 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:26:18.181895 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:26:18.181912 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:26:18.181923 | orchestrator | 2025-05-28 17:26:18.181934 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-28 17:26:18.181945 | orchestrator | Wednesday 28 May 2025 17:23:36 +0000 (0:00:00.302) 0:00:00.580 ********* 2025-05-28 17:26:18.181956 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 
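"Group hosts based on Kolla action" and "Group hosts based on enabled services" build dynamic inventory groups from variables, which is what produces the enable_keystone_True group name in the items above, and the Keystone wait is a plain TCP probe on the public port. kolla-ansible does this with group_by, and the port wait is most likely a wait_for probe; roughly (host variable and timeout assumed):

# Rough shape of the grouping and the port probe seen above.
- name: Group hosts based on enabled services
  ansible.builtin.group_by:
    key: "enable_keystone_{{ enable_keystone }}"   # yields enable_keystone_True

- name: Waiting for Keystone public port to be UP
  ansible.builtin.wait_for:
    host: "{{ api_interface_address }}"   # variable name assumed
    port: 5000                            # Keystone port from the log
    timeout: 300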
2025-05-28 17:26:18.181966 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-05-28 17:26:18.181977 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-05-28 17:26:18.181988 | orchestrator | 2025-05-28 17:26:18.181999 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-05-28 17:26:18.182009 | orchestrator | 2025-05-28 17:26:18.182116 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-28 17:26:18.182132 | orchestrator | Wednesday 28 May 2025 17:23:36 +0000 (0:00:00.423) 0:00:01.003 ********* 2025-05-28 17:26:18.182143 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:26:18.182233 | orchestrator | 2025-05-28 17:26:18.182247 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-05-28 17:26:18.182258 | orchestrator | Wednesday 28 May 2025 17:23:37 +0000 (0:00:00.671) 0:00:01.674 ********* 2025-05-28 17:26:18.182277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-28 17:26:18.182295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-28 17:26:18.182315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-28 17:26:18.182330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-28 17:26:18.182486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-28 17:26:18.182513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-28 17:26:18.182531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-28 17:26:18.182552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-28 17:26:18.182652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-28 17:26:18.182670 | orchestrator | 2025-05-28 17:26:18.182687 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-05-28 17:26:18.182704 | orchestrator | Wednesday 28 May 2025 17:23:38 +0000 (0:00:01.560) 0:00:03.235 ********* 2025-05-28 17:26:18.182720 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-05-28 17:26:18.182738 | orchestrator | 2025-05-28 17:26:18.182755 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-05-28 17:26:18.182780 | orchestrator | Wednesday 28 May 2025 17:23:39 +0000 (0:00:00.819) 0:00:04.054 ********* 2025-05-28 17:26:18.182797 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:26:18.182813 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:26:18.182829 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:26:18.182846 | orchestrator | 2025-05-28 17:26:18.182862 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-05-28 17:26:18.182890 | orchestrator | Wednesday 28 May 2025 17:23:40 +0000 (0:00:00.442) 0:00:04.496 ********* 2025-05-28 17:26:18.182908 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-28 17:26:18.182925 | orchestrator | 2025-05-28 17:26:18.182940 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-28 17:26:18.182956 | orchestrator | Wednesday 28 May 2025 17:23:40 +0000 (0:00:00.656) 0:00:05.153 ********* 2025-05-28 17:26:18.182971 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:26:18.182986 | orchestrator | 2025-05-28 17:26:18.183012 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-05-28 17:26:18.183029 | orchestrator | Wednesday 28 May 2025 17:23:41 +0000 (0:00:00.532) 0:00:05.686 ********* 2025-05-28 17:26:18.183048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': 
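The per-node result items above are raw Python dict dumps and hard to scan. Re-rendered as YAML purely for readability (values copied from the testbed-node-0 entry in the log; two empty placeholder strings in the logged volume list omitted), the keystone service definition reads:

# The keystone service definition from the log, testbed-node-0 entry,
# re-rendered as YAML; content unchanged apart from the empty placeholders.
keystone:
  container_name: keystone
  group: keystone
  enabled: true
  image: registry.osism.tech/kolla/keystone:2024.2
  volumes:
    - /etc/kolla/keystone/:/var/lib/kolla/config_files/:ro
    - /etc/localtime:/etc/localtime:ro
    - /etc/timezone:/etc/timezone:ro
    - kolla_logs:/var/log/kolla/
    - keystone_fernet_tokens:/etc/keystone/fernet-keys
  healthcheck:
    interval: '30'
    retries: '3'
    start_period: '5'
    test: [CMD-SHELL, 'healthcheck_curl http://192.168.16.10:5000']
    timeout: '30'
  haproxy:
    keystone_internal:
      enabled: true
      mode: http
      external: false
      tls_backend: 'no'
      port: '5000'
      listen_port: '5000'
      backend_http_extra: [balance roundrobin]
    keystone_external:
      enabled: true
      mode: http
      external: true
      external_fqdn: api.testbed.osism.xyz
      tls_backend: 'no'
      port: '5000'
      listen_port: '5000'
      backend_http_extra: [balance roundrobin]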
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-28 17:26:18.183067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-28 17:26:18.183085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-28 17:26:18.183110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-28 17:26:18.183211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-28 17:26:18.183234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-28 17:26:18.183245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-28 17:26:18.183255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-28 17:26:18.183265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-28 17:26:18.183275 | orchestrator | 2025-05-28 17:26:18.183285 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-05-28 17:26:18.183295 | orchestrator | Wednesday 28 May 2025 17:23:44 +0000 (0:00:03.435) 0:00:09.122 ********* 2025-05-28 17:26:18.183311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-28 17:26:18.183336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-28 17:26:18.183347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-28 17:26:18.183357 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:26:18.183367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-28 17:26:18.183378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-28 17:26:18.183388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-28 17:26:18.183410 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:26:18.183433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-28 17:26:18.183444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-28 17:26:18.183455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-28 17:26:18.183464 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:26:18.183474 | orchestrator | 2025-05-28 17:26:18.183484 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 
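Both backend-TLS copy tasks are skipped on every node because each service entry carries tls_backend: 'no', so TLS is not terminated at the service backends in this testbed. If backend TLS were wanted, kolla-ansible switches it on through globals.yml, roughly as follows (standard kolla-ansible variable names; not enabled in this run):

# globals.yml sketch; with these set, the tasks skipped above would copy
# per-service certificates and keys into the containers.
kolla_enable_tls_internal: "yes"
kolla_enable_tls_backend: "yes"
kolla_copy_ca_into_containers: "yes"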
2025-05-28 17:26:18.183494 | orchestrator | Wednesday 28 May 2025 17:23:45 +0000 (0:00:00.535) 0:00:09.658 ********* 2025-05-28 17:26:18.183504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-28 17:26:18.183521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-28 17:26:18.183536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-28 17:26:18.183546 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:26:18.183564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}}}})  2025-05-28 17:26:18.183574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-28 17:26:18.183584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-28 17:26:18.183600 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:26:18.183617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-28 17:26:18.183656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-28 17:26:18.183691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-28 17:26:18.183708 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:26:18.183724 | orchestrator | 2025-05-28 17:26:18.183741 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-05-28 17:26:18.183758 | orchestrator | Wednesday 28 May 2025 17:23:45 +0000 (0:00:00.757) 0:00:10.415 ********* 2025-05-28 17:26:18.183775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-28 17:26:18.183794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-28 17:26:18.183832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-28 17:26:18.183851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-28 17:26:18.183862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-28 17:26:18.183872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-28 17:26:18.183881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-28 17:26:18.183899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-28 17:26:18.183909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
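The config.json loop covers three containers per node, and their division of labour can be read straight from the logged healthchecks: keystone answers healthcheck_curl on the API port 5000, keystone_ssh exposes an sshd on 8023 (used in kolla-ansible to distribute fernet keys between the controllers), and keystone_fernet runs the key rotation checked by fernet-healthcheck.sh. As a compact summary:

# Division of labour among the three keystone containers; the checks are
# quoted exactly as logged above.
keystone:        {purpose: identity API,           check: 'healthcheck_curl http://<node-ip>:5000'}
keystone_ssh:    {purpose: fernet key distribution, check: 'healthcheck_listen sshd 8023'}
keystone_fernet: {purpose: fernet key rotation,     check: '/usr/bin/fernet-healthcheck.sh'}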
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-28 17:26:18.183919 | orchestrator | 2025-05-28 17:26:18.183928 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-05-28 17:26:18.183938 | orchestrator | Wednesday 28 May 2025 17:23:49 +0000 (0:00:03.518) 0:00:13.934 ********* 2025-05-28 17:26:18.183959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-28 17:26:18.183971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-28 17:26:18.183981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-28 17:26:18.183999 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-28 17:26:18.184014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-28 17:26:18.184025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-28 17:26:18.184042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-28 17:26:18.184052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-28 
17:26:18.184062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-28 17:26:18.184102 | orchestrator | 2025-05-28 17:26:18.184112 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-05-28 17:26:18.184122 | orchestrator | Wednesday 28 May 2025 17:23:54 +0000 (0:00:05.107) 0:00:19.041 ********* 2025-05-28 17:26:18.184132 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:26:18.184142 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:26:18.184152 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:26:18.184161 | orchestrator | 2025-05-28 17:26:18.184200 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-05-28 17:26:18.184210 | orchestrator | Wednesday 28 May 2025 17:23:55 +0000 (0:00:01.348) 0:00:20.389 ********* 2025-05-28 17:26:18.184219 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:26:18.184229 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:26:18.184239 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:26:18.184248 | orchestrator | 2025-05-28 17:26:18.184258 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-05-28 17:26:18.184267 | orchestrator | Wednesday 28 May 2025 17:23:56 +0000 (0:00:00.564) 0:00:20.953 ********* 2025-05-28 17:26:18.184277 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:26:18.184286 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:26:18.184296 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:26:18.184305 | orchestrator | 2025-05-28 17:26:18.184314 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-05-28 17:26:18.184324 | orchestrator | Wednesday 28 May 2025 17:23:56 +0000 (0:00:00.453) 0:00:21.407 ********* 2025-05-28 17:26:18.184333 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:26:18.184342 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:26:18.184352 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:26:18.184361 | orchestrator | 2025-05-28 17:26:18.184370 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-05-28 17:26:18.184380 | orchestrator | Wednesday 28 May 2025 17:23:57 +0000 (0:00:00.302) 0:00:21.710 ********* 2025-05-28 17:26:18.184395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-28 17:26:18.184413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-28 17:26:18.184424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-28 17:26:18.184442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-28 17:26:18.184452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-28 17:26:18.184463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-28 17:26:18.184480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-28 17:26:18.184490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-28 17:26:18.184509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-28 17:26:18.184519 | orchestrator | 2025-05-28 17:26:18.184584 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-28 17:26:18.184597 | orchestrator | Wednesday 28 May 2025 17:23:59 +0000 (0:00:02.288) 0:00:23.999 ********* 2025-05-28 17:26:18.184607 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:26:18.184616 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:26:18.184626 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:26:18.184635 | orchestrator | 2025-05-28 17:26:18.184645 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-05-28 17:26:18.184655 | orchestrator | Wednesday 28 May 2025 17:23:59 
+0000 (0:00:00.320) 0:00:24.319 ********* 2025-05-28 17:26:18.184665 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-28 17:26:18.184675 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-28 17:26:18.184684 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-28 17:26:18.184694 | orchestrator | 2025-05-28 17:26:18.184711 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-05-28 17:26:18.184728 | orchestrator | Wednesday 28 May 2025 17:24:01 +0000 (0:00:01.963) 0:00:26.283 ********* 2025-05-28 17:26:18.184745 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-28 17:26:18.184761 | orchestrator | 2025-05-28 17:26:18.184784 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-05-28 17:26:18.184804 | orchestrator | Wednesday 28 May 2025 17:24:02 +0000 (0:00:00.896) 0:00:27.180 ********* 2025-05-28 17:26:18.184825 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:26:18.184848 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:26:18.184972 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:26:18.184996 | orchestrator | 2025-05-28 17:26:18.185007 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-05-28 17:26:18.185016 | orchestrator | Wednesday 28 May 2025 17:24:03 +0000 (0:00:00.494) 0:00:27.675 ********* 2025-05-28 17:26:18.185026 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-28 17:26:18.185035 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-28 17:26:18.185045 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-28 17:26:18.185054 | orchestrator | 2025-05-28 17:26:18.185064 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-05-28 17:26:18.185073 | orchestrator | Wednesday 28 May 2025 17:24:04 +0000 (0:00:00.981) 0:00:28.656 ********* 2025-05-28 17:26:18.185083 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:26:18.185093 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:26:18.185102 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:26:18.185111 | orchestrator | 2025-05-28 17:26:18.185125 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-05-28 17:26:18.185135 | orchestrator | Wednesday 28 May 2025 17:24:04 +0000 (0:00:00.273) 0:00:28.929 ********* 2025-05-28 17:26:18.185154 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-28 17:26:18.185189 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-28 17:26:18.185201 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-28 17:26:18.185210 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-28 17:26:18.185220 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-28 17:26:18.185240 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-28 17:26:18.185250 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-28 
17:26:18.185260 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-28 17:26:18.185269 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-28 17:26:18.185279 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-28 17:26:18.185288 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-28 17:26:18.185297 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-28 17:26:18.185307 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-28 17:26:18.185316 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-28 17:26:18.185326 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-28 17:26:18.185336 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-28 17:26:18.185345 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-28 17:26:18.185355 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-28 17:26:18.185364 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-28 17:26:18.185374 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-28 17:26:18.185383 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-28 17:26:18.185393 | orchestrator | 2025-05-28 17:26:18.185403 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-05-28 17:26:18.185412 | orchestrator | Wednesday 28 May 2025 17:24:13 +0000 (0:00:08.512) 0:00:37.441 ********* 2025-05-28 17:26:18.185422 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-28 17:26:18.185431 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-28 17:26:18.185441 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-28 17:26:18.185450 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-28 17:26:18.185460 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-28 17:26:18.185469 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-28 17:26:18.185479 | orchestrator | 2025-05-28 17:26:18.185488 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-05-28 17:26:18.185498 | orchestrator | Wednesday 28 May 2025 17:24:15 +0000 (0:00:02.464) 0:00:39.906 ********* 2025-05-28 17:26:18.185514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-28 17:26:18.185540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-28 17:26:18.185552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-28 17:26:18.185563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-28 17:26:18.185573 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-28 17:26:18.185589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-28 17:26:18.185603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-28 17:26:18.185620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-28 17:26:18.185631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-28 17:26:18.185640 | orchestrator | 2025-05-28 17:26:18.185650 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-28 17:26:18.185660 | orchestrator | Wednesday 28 May 2025 17:24:17 +0000 (0:00:02.228) 0:00:42.134 ********* 2025-05-28 17:26:18.185669 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:26:18.185679 | orchestrator | skipping: [testbed-node-1] 2025-05-28 
17:26:18.185689 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:26:18.185698 | orchestrator | 2025-05-28 17:26:18.185708 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-05-28 17:26:18.185769 | orchestrator | Wednesday 28 May 2025 17:24:17 +0000 (0:00:00.272) 0:00:42.407 ********* 2025-05-28 17:26:18.185780 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:26:18.185789 | orchestrator | 2025-05-28 17:26:18.185799 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-05-28 17:26:18.185809 | orchestrator | Wednesday 28 May 2025 17:24:20 +0000 (0:00:02.160) 0:00:44.568 ********* 2025-05-28 17:26:18.185818 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:26:18.185828 | orchestrator | 2025-05-28 17:26:18.185837 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-05-28 17:26:18.185849 | orchestrator | Wednesday 28 May 2025 17:24:22 +0000 (0:00:02.531) 0:00:47.099 ********* 2025-05-28 17:26:18.185875 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:26:18.185892 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:26:18.185908 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:26:18.185923 | orchestrator | 2025-05-28 17:26:18.185943 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-05-28 17:26:18.185965 | orchestrator | Wednesday 28 May 2025 17:24:23 +0000 (0:00:00.870) 0:00:47.970 ********* 2025-05-28 17:26:18.185980 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:26:18.185996 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:26:18.186010 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:26:18.186093 | orchestrator | 2025-05-28 17:26:18.186110 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-05-28 17:26:18.186125 | orchestrator | Wednesday 28 May 2025 17:24:23 +0000 (0:00:00.298) 0:00:48.269 ********* 2025-05-28 17:26:18.186140 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:26:18.186156 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:26:18.186206 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:26:18.186223 | orchestrator | 2025-05-28 17:26:18.186238 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-05-28 17:26:18.186254 | orchestrator | Wednesday 28 May 2025 17:24:24 +0000 (0:00:00.373) 0:00:48.642 ********* 2025-05-28 17:26:18.186276 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:26:18.186296 | orchestrator | 2025-05-28 17:26:18.186313 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-05-28 17:26:18.186329 | orchestrator | Wednesday 28 May 2025 17:24:39 +0000 (0:00:14.808) 0:01:03.450 ********* 2025-05-28 17:26:18.186345 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:26:18.186362 | orchestrator | 2025-05-28 17:26:18.186378 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-28 17:26:18.186394 | orchestrator | Wednesday 28 May 2025 17:24:48 +0000 (0:00:09.418) 0:01:12.868 ********* 2025-05-28 17:26:18.186408 | orchestrator | 2025-05-28 17:26:18.186425 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-28 17:26:18.186435 | orchestrator | Wednesday 28 May 2025 17:24:48 +0000 (0:00:00.233) 0:01:13.102 ********* 2025-05-28 17:26:18.186445 | 
orchestrator | 2025-05-28 17:26:18.186455 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-28 17:26:18.186465 | orchestrator | Wednesday 28 May 2025 17:24:48 +0000 (0:00:00.060) 0:01:13.162 ********* 2025-05-28 17:26:18.186474 | orchestrator | 2025-05-28 17:26:18.186484 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-05-28 17:26:18.186503 | orchestrator | Wednesday 28 May 2025 17:24:48 +0000 (0:00:00.062) 0:01:13.225 ********* 2025-05-28 17:26:18.186513 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:26:18.186523 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:26:18.186532 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:26:18.186542 | orchestrator | 2025-05-28 17:26:18.186551 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-05-28 17:26:18.186561 | orchestrator | Wednesday 28 May 2025 17:25:10 +0000 (0:00:21.713) 0:01:34.939 ********* 2025-05-28 17:26:18.186571 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:26:18.186580 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:26:18.186590 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:26:18.186599 | orchestrator | 2025-05-28 17:26:18.186609 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-05-28 17:26:18.186619 | orchestrator | Wednesday 28 May 2025 17:25:20 +0000 (0:00:09.940) 0:01:44.879 ********* 2025-05-28 17:26:18.186629 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:26:18.186639 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:26:18.186660 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:26:18.186670 | orchestrator | 2025-05-28 17:26:18.186680 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-28 17:26:18.186690 | orchestrator | Wednesday 28 May 2025 17:25:32 +0000 (0:00:11.690) 0:01:56.569 ********* 2025-05-28 17:26:18.186701 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:26:18.186721 | orchestrator | 2025-05-28 17:26:18.186731 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-05-28 17:26:18.186741 | orchestrator | Wednesday 28 May 2025 17:25:32 +0000 (0:00:00.750) 0:01:57.320 ********* 2025-05-28 17:26:18.186751 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:26:18.186761 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:26:18.186771 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:26:18.186782 | orchestrator | 2025-05-28 17:26:18.186792 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-05-28 17:26:18.186802 | orchestrator | Wednesday 28 May 2025 17:25:33 +0000 (0:00:00.674) 0:01:57.994 ********* 2025-05-28 17:26:18.186812 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:26:18.186821 | orchestrator | 2025-05-28 17:26:18.186831 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-05-28 17:26:18.186841 | orchestrator | Wednesday 28 May 2025 17:25:35 +0000 (0:00:01.747) 0:01:59.742 ********* 2025-05-28 17:26:18.186850 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-05-28 17:26:18.186860 | orchestrator | 2025-05-28 17:26:18.186870 | orchestrator | TASK [service-ks-register : keystone | Creating services] 
********************** 2025-05-28 17:26:18.186879 | orchestrator | Wednesday 28 May 2025 17:25:44 +0000 (0:00:09.654) 0:02:09.396 ********* 2025-05-28 17:26:18.186889 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-05-28 17:26:18.186900 | orchestrator | 2025-05-28 17:26:18.186909 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-05-28 17:26:18.186919 | orchestrator | Wednesday 28 May 2025 17:26:05 +0000 (0:00:20.242) 0:02:29.638 ********* 2025-05-28 17:26:18.186929 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-05-28 17:26:18.186940 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-05-28 17:26:18.186949 | orchestrator | 2025-05-28 17:26:18.186959 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-05-28 17:26:18.186968 | orchestrator | Wednesday 28 May 2025 17:26:11 +0000 (0:00:05.908) 0:02:35.547 ********* 2025-05-28 17:26:18.186978 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:26:18.186988 | orchestrator | 2025-05-28 17:26:18.186998 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-05-28 17:26:18.187008 | orchestrator | Wednesday 28 May 2025 17:26:11 +0000 (0:00:00.380) 0:02:35.928 ********* 2025-05-28 17:26:18.187018 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:26:18.187028 | orchestrator | 2025-05-28 17:26:18.187038 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-05-28 17:26:18.187048 | orchestrator | Wednesday 28 May 2025 17:26:11 +0000 (0:00:00.116) 0:02:36.044 ********* 2025-05-28 17:26:18.187057 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:26:18.187067 | orchestrator | 2025-05-28 17:26:18.187078 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-05-28 17:26:18.187089 | orchestrator | Wednesday 28 May 2025 17:26:11 +0000 (0:00:00.121) 0:02:36.165 ********* 2025-05-28 17:26:18.187099 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:26:18.187109 | orchestrator | 2025-05-28 17:26:18.187118 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-05-28 17:26:18.187128 | orchestrator | Wednesday 28 May 2025 17:26:12 +0000 (0:00:00.456) 0:02:36.622 ********* 2025-05-28 17:26:18.187137 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:26:18.187147 | orchestrator | 2025-05-28 17:26:18.187157 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-28 17:26:18.187193 | orchestrator | Wednesday 28 May 2025 17:26:15 +0000 (0:00:03.319) 0:02:39.942 ********* 2025-05-28 17:26:18.187204 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:26:18.187214 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:26:18.187223 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:26:18.187239 | orchestrator | 2025-05-28 17:26:18.187249 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:26:18.187259 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-05-28 17:26:18.187270 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-05-28 17:26:18.187285 | orchestrator 
| testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-05-28 17:26:18.187295 | orchestrator | 2025-05-28 17:26:18.187305 | orchestrator | 2025-05-28 17:26:18.187314 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:26:18.187324 | orchestrator | Wednesday 28 May 2025 17:26:16 +0000 (0:00:00.631) 0:02:40.573 ********* 2025-05-28 17:26:18.187333 | orchestrator | =============================================================================== 2025-05-28 17:26:18.187345 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 21.71s 2025-05-28 17:26:18.187362 | orchestrator | service-ks-register : keystone | Creating services --------------------- 20.24s 2025-05-28 17:26:18.187378 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.81s 2025-05-28 17:26:18.187393 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.69s 2025-05-28 17:26:18.187410 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.94s 2025-05-28 17:26:18.187435 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 9.65s 2025-05-28 17:26:18.187451 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.42s 2025-05-28 17:26:18.187469 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.51s 2025-05-28 17:26:18.187485 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 5.91s 2025-05-28 17:26:18.187503 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.11s 2025-05-28 17:26:18.187513 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.52s 2025-05-28 17:26:18.187523 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.44s 2025-05-28 17:26:18.187533 | orchestrator | keystone : Creating default user role ----------------------------------- 3.32s 2025-05-28 17:26:18.187542 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.53s 2025-05-28 17:26:18.187552 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.46s 2025-05-28 17:26:18.187562 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.29s 2025-05-28 17:26:18.187572 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.23s 2025-05-28 17:26:18.187581 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.16s 2025-05-28 17:26:18.187591 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.96s 2025-05-28 17:26:18.187601 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.75s 2025-05-28 17:26:18.187610 | orchestrator | 2025-05-28 17:26:18 | INFO  | Task 029ce531-bca3-434d-b6ce-8d2abb3a9626 is in state STARTED 2025-05-28 17:26:18.187620 | orchestrator | 2025-05-28 17:26:18 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:26:21.236852 | orchestrator | 2025-05-28 17:26:21 | INFO  | Task ffc1a8d8-c459-47a4-8999-43321493f5ee is in state STARTED 2025-05-28 17:26:21.236996 | orchestrator | 2025-05-28 17:26:21 | INFO  | Task 7432d1e9-9d72-44ec-b255-622be1d0ea02 is in state STARTED 
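
The keystone play above wires up fernet-rotate.sh, fernet-node-sync.sh and fernet-push.sh as cron jobs so the keystone-fernet container can rotate and distribute Keystone's Fernet token keys. Below is a minimal sketch of the rotation step only, assuming the standard Keystone key-repository layout (file 0 is the staged key, the highest-numbered file is the primary key); the path and retention count are illustrative and not taken from this deployment:

    import base64
    import os
    import secrets

    KEY_REPO = "/etc/keystone/fernet-keys"  # illustrative path for this sketch

    def new_key() -> bytes:
        # A Fernet key is 32 random bytes, url-safe base64 encoded.
        return base64.urlsafe_b64encode(secrets.token_bytes(32))

    def rotate(repo: str = KEY_REPO, max_active_keys: int = 3) -> None:
        keys = sorted(int(name) for name in os.listdir(repo))
        if keys:
            # Promote the staged key (0) to become the new primary key.
            os.rename(os.path.join(repo, "0"), os.path.join(repo, str(keys[-1] + 1)))
        # Stage a fresh key as 0.
        with open(os.path.join(repo, "0"), "wb") as handle:
            handle.write(new_key())
        keys = sorted(int(name) for name in os.listdir(repo))
        # Purge the oldest promoted keys, but never the staged key 0.
        for excess in [k for k in keys if k != 0][: max(0, len(keys) - max_active_keys)]:
            os.remove(os.path.join(repo, str(excess)))

The fernet-node-sync.sh and fernet-push.sh scripts copied above handle the other half of the job, pushing the repository to the peer nodes over the keystone-ssh container (the sshd listening on port 8023 in the healthchecks).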
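
The service-ks-register tasks above create the identity service and its internal and public endpoints in the Keystone catalog. A rough equivalent using openstacksdk is sketched below; the cloud name "testbed" is a hypothetical clouds.yaml entry, and kolla-ansible actually performs this through its own Ansible modules rather than this exact code:

    import openstack

    conn = openstack.connect(cloud="testbed")  # hypothetical clouds.yaml entry

    # Create the service, then one endpoint per interface, mirroring the
    # "keystone -> https://api-int.testbed.osism.xyz:5000 -> internal" lines above.
    service = conn.identity.create_service(name="keystone", type="identity")
    for interface, url in [
        ("internal", "https://api-int.testbed.osism.xyz:5000"),
        ("public", "https://api.testbed.osism.xyz:5000"),
    ]:
        conn.identity.create_endpoint(
            service_id=service.id,
            interface=interface,
            url=url,
            region_id="RegionOne",
        )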
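
The repeated INFO lines around this point come from the OSISM task watcher, which polls every started task until it reports SUCCESS. A minimal sketch of that wait loop follows, with a hypothetical get_task_state() stub standing in for the real task-API lookup:

    import time

    def get_task_state(task_id: str) -> str:
        """Hypothetical stand-in for the real task-API lookup."""
        return "SUCCESS"

    def wait_for_tasks(task_ids, interval: float = 1.0) -> None:
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)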
2025-05-28 17:26:21.238726 | orchestrator | 2025-05-28 17:26:21 | INFO  | Task 628ae488-32c1-4536-8d3a-3f4b270537be is in state STARTED 2025-05-28 17:26:21.239591 | orchestrator | 2025-05-28 17:26:21 | INFO  | Task 4ad7eada-631a-4384-9c67-a6cd37dd95bb is in state STARTED 2025-05-28 17:26:21.240677 | orchestrator | 2025-05-28 17:26:21 | INFO  | Task 029ce531-bca3-434d-b6ce-8d2abb3a9626 is in state STARTED 2025-05-28 17:26:21.240700 | orchestrator | 2025-05-28 17:26:21 | INFO  | Wait 1 second(s) until the next check [8 identical polls (17:26:24 to 17:26:45) trimmed; tasks ffc1a8d8, 7432d1e9, 628ae488, 4ad7eada and 029ce531 all remained in state STARTED] 2025-05-28 17:26:48.678177 | orchestrator | 2025-05-28 17:26:48 | INFO  | Task ffc1a8d8-c459-47a4-8999-43321493f5ee is in state STARTED 2025-05-28 17:26:48.678352 | orchestrator | 2025-05-28 17:26:48 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:26:48.678457 | orchestrator | 2025-05-28 17:26:48 | INFO  | Task 7432d1e9-9d72-44ec-b255-622be1d0ea02 is in state STARTED 2025-05-28 17:26:48.679416 | orchestrator | 2025-05-28 17:26:48 | INFO  | Task 628ae488-32c1-4536-8d3a-3f4b270537be is in state STARTED 2025-05-28 17:26:48.679561 | orchestrator | 2025-05-28 17:26:48 | INFO  | Task 4ad7eada-631a-4384-9c67-a6cd37dd95bb is in state SUCCESS 2025-05-28 17:26:48.680055 | orchestrator | 2025-05-28 17:26:48 | INFO  | Task 029ce531-bca3-434d-b6ce-8d2abb3a9626 is in state STARTED 2025-05-28 17:26:48.680113 | orchestrator | 2025-05-28 17:26:48 | INFO  | Wait 1 second(s) until the next check [12 identical polls (17:26:51 to 17:27:25) trimmed; tasks ffc1a8d8, c1daa203, 7432d1e9, 628ae488 and 029ce531 all remained in state STARTED] 2025-05-28 17:27:28.173557 | orchestrator | 2025-05-28 17:27:28 | INFO  | Task ffc1a8d8-c459-47a4-8999-43321493f5ee is in state STARTED 2025-05-28 17:27:28.173688 | orchestrator | 2025-05-28 17:27:28 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:27:28.173704 | orchestrator | 2025-05-28 17:27:28 | INFO  | Task 7432d1e9-9d72-44ec-b255-622be1d0ea02 is in state STARTED 2025-05-28 17:27:28.173871 |
2025-05-28 17:27:28.173557 | orchestrator | 2025-05-28 17:27:28 | INFO  | Task ffc1a8d8-c459-47a4-8999-43321493f5ee is in state STARTED
2025-05-28 17:27:28.173688 | orchestrator | 2025-05-28 17:27:28 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:27:28.173704 | orchestrator | 2025-05-28 17:27:28 | INFO  | Task 7432d1e9-9d72-44ec-b255-622be1d0ea02 is in state STARTED
2025-05-28 17:27:28.173871 | orchestrator | 2025-05-28 17:27:28 | INFO  | Task 628ae488-32c1-4536-8d3a-3f4b270537be is in state STARTED
2025-05-28 17:27:28.174280 | orchestrator | 2025-05-28 17:27:28 | INFO  | Task 029ce531-bca3-434d-b6ce-8d2abb3a9626 is in state SUCCESS
2025-05-28 17:27:28.174459 | orchestrator | 2025-05-28 17:27:28 | INFO  | Wait 1 second(s) until the next check

PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on Kolla action] ***************************************
Wednesday 28 May 2025  17:26:15 +0000 (0:00:00.238)       0:00:00.238 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-manager]

TASK [Group hosts based on enabled services] ***********************************
Wednesday 28 May 2025  17:26:17 +0000 (0:00:01.033)       0:00:01.271 *********
ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
ok: [testbed-manager] => (item=enable_ceph_rgw_True)

PLAY [Apply role ceph-rgw] *****************************************************

TASK [ceph-rgw : include_tasks] ************************************************
Wednesday 28 May 2025  17:26:17 +0000 (0:00:00.685)       0:00:01.957 *********
included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager

TASK [service-ks-register : ceph-rgw | Creating services] **********************
Wednesday 28 May 2025  17:26:20 +0000 (0:00:02.479)       0:00:04.436 *********
changed: [testbed-node-0] => (item=swift (object-store))

TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
Wednesday 28 May 2025  17:26:24 +0000 (0:00:03.880)       0:00:08.317 *********
changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)

TASK [service-ks-register : ceph-rgw | Creating projects] **********************
Wednesday 28 May 2025  17:26:29 +0000 (0:00:05.578)       0:00:13.895 *********
ok: [testbed-node-0] => (item=service)

TASK [service-ks-register : ceph-rgw | Creating users] *************************
Wednesday 28 May 2025  17:26:32 +0000 (0:00:02.752)       0:00:16.647 *********
[WARNING]: Module did not set no_log for update_password
changed: [testbed-node-0] => (item=ceph_rgw -> service)

TASK [service-ks-register : ceph-rgw | Creating roles] *************************
Wednesday 28 May 2025  17:26:36 +0000 (0:00:03.774)       0:00:20.422 *********
ok: [testbed-node-0] => (item=admin)
changed: [testbed-node-0] => (item=ResellerAdmin)

TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
Wednesday 28 May 2025  17:26:42 +0000 (0:00:06.065)       0:00:26.487 *********
changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin)

PLAY RECAP *********************************************************************
testbed-manager            : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-0             : ok=9    changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-1             : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-2             : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-3             : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-4             : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-5             : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

TASKS RECAP ********************************************************************
Wednesday 28 May 2025  17:26:47 +0000 (0:00:04.873)       0:00:31.360 *********
===============================================================================
service-ks-register : ceph-rgw | Creating roles ------------------------- 6.07s
service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.58s
service-ks-register : ceph-rgw | Granting user roles -------------------- 4.87s
service-ks-register : ceph-rgw | Creating services ---------------------- 3.88s
service-ks-register : ceph-rgw | Creating users ------------------------- 3.77s
service-ks-register : ceph-rgw | Creating projects ---------------------- 2.75s
ceph-rgw : include_tasks ------------------------------------------------ 2.48s
Group hosts based on Kolla action --------------------------------------- 1.03s
Group hosts based on enabled services ----------------------------------- 0.69s
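The service-ks-register tasks above boil down to plain Keystone API calls: create the swift service, add the internal and public endpoints, create the ceph_rgw user in the service project, and grant it the admin and ResellerAdmin roles. A rough openstacksdk sketch of the same sequence (cloud name and password handling are illustrative, not the role's actual implementation):

    import openstack

    # Assumed clouds.yaml entry with admin credentials for the testbed.
    conn = openstack.connect(cloud="testbed-admin")

    service = conn.identity.create_service(name="swift", type="object-store")
    for interface, host in [("internal", "api-int.testbed.osism.xyz"),
                            ("public", "api.testbed.osism.xyz")]:
        conn.identity.create_endpoint(
            service_id=service.id,
            interface=interface,
            url=f"https://{host}:6780/swift/v1/AUTH_%(project_id)s",
        )

    project = conn.identity.find_project("service")
    user = conn.identity.create_user(name="ceph_rgw",
                                     password="...",  # comes from the secrets store
                                     default_project_id=project.id)
    for role_name in ("admin", "ResellerAdmin"):
        role = (conn.identity.find_role(role_name)
                or conn.identity.create_role(name=role_name))
        conn.identity.assign_project_role_to_user(project, user, role)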

PLAY [Bootstrap ceph dashboard] ************************************************

TASK [Disable the ceph dashboard] **********************************************
Wednesday 28 May 2025  17:26:08 +0000 (0:00:00.268)       0:00:00.268 *********
changed: [testbed-manager]

TASK [Set mgr/dashboard/ssl to false] ******************************************
Wednesday 28 May 2025  17:26:10 +0000 (0:00:02.122)       0:00:02.391 *********
changed: [testbed-manager]

TASK [Set mgr/dashboard/server_port to 7000] ***********************************
Wednesday 28 May 2025  17:26:12 +0000 (0:00:01.110)       0:00:03.502 *********
changed: [testbed-manager]

TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
Wednesday 28 May 2025  17:26:13 +0000 (0:00:01.050)       0:00:04.552 *********
changed: [testbed-manager]

TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
Wednesday 28 May 2025  17:26:14 +0000 (0:00:01.085)       0:00:05.637 *********
changed: [testbed-manager]

TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
Wednesday 28 May 2025  17:26:15 +0000 (0:00:00.949)       0:00:06.587 *********
changed: [testbed-manager]

TASK [Enable the ceph dashboard] ***********************************************
Wednesday 28 May 2025  17:26:16 +0000 (0:00:00.873)       0:00:07.461 *********
changed: [testbed-manager]

TASK [Write ceph_dashboard_password to temporary file] *************************
Wednesday 28 May 2025  17:26:17 +0000 (0:00:01.071)       0:00:08.532 *********
changed: [testbed-manager]

TASK [Create admin user] *******************************************************
Wednesday 28 May 2025  17:26:18 +0000 (0:00:00.945)       0:00:09.478 *********
changed: [testbed-manager]

TASK [Remove temporary file for ceph_dashboard_password] ***********************
Wednesday 28 May 2025  17:27:03 +0000 (0:00:45.207)       0:00:54.686 *********
skipping: [testbed-manager]

PLAY [Restart ceph manager services] *******************************************

TASK [Restart ceph manager service] ********************************************
Wednesday 28 May 2025  17:27:03 +0000 (0:00:00.140)       0:00:54.827 *********
changed: [testbed-node-0]

PLAY [Restart ceph manager services] *******************************************

TASK [Restart ceph manager service] ********************************************
Wednesday 28 May 2025  17:27:14 +0000 (0:00:11.491)       0:01:06.318 *********
changed: [testbed-node-1]

PLAY [Restart ceph manager services] *******************************************

TASK [Restart ceph manager service] ********************************************
Wednesday 28 May 2025  17:27:26 +0000 (0:00:11.186)       0:01:17.504 *********
changed: [testbed-node-2]

PLAY RECAP *********************************************************************
testbed-manager            : ok=9    changed=9    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0
testbed-node-0             : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-1             : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-2             : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

TASKS RECAP ********************************************************************
Wednesday 28 May 2025  17:27:27 +0000 (0:00:01.123)       0:01:18.628 *********
===============================================================================
Create admin user ------------------------------------------------------ 45.21s
Restart ceph manager service ------------------------------------------- 23.80s
Disable the ceph dashboard ---------------------------------------------- 2.12s
Set mgr/dashboard/ssl to false ------------------------------------------ 1.11s
Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.09s
Enable the ceph dashboard ----------------------------------------------- 1.07s
Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.05s
Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.95s
Write ceph_dashboard_password to temporary file ------------------------- 0.95s
Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.87s
Remove temporary file for ceph_dashboard_password ----------------------- 0.14s
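The dashboard bootstrap above is driven through the ceph CLI on the manager: disable the mgr dashboard module, set its options, re-enable it, then create the admin account from a temporary password file. A condensed sketch of the equivalent calls, issued from Python here; the password file path and the "administrator" role are assumptions, not read from the play:

    import subprocess

    def ceph(*args):
        # Thin wrapper; on the testbed these run against the Ceph cluster
        # from the manager node.
        subprocess.run(["ceph", *args], check=True)

    ceph("mgr", "module", "disable", "dashboard")
    ceph("config", "set", "mgr", "mgr/dashboard/ssl", "false")
    ceph("config", "set", "mgr", "mgr/dashboard/server_port", "7000")
    ceph("config", "set", "mgr", "mgr/dashboard/server_addr", "0.0.0.0")
    ceph("config", "set", "mgr", "mgr/dashboard/standby_behaviour", "error")
    ceph("config", "set", "mgr", "mgr/dashboard/standby_error_status_code", "404")
    ceph("mgr", "module", "enable", "dashboard")

    # The 45 s "Create admin user" step: -i reads the password from a file
    # so it never appears on the command line (path assumed here).
    ceph("dashboard", "ac-user-create", "admin",
         "-i", "/tmp/ceph_dashboard_password", "administrator")

The mgr restarts that follow are what make the module configuration take effect on every standby manager.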
2025-05-28 17:27:31.200793 | orchestrator | 2025-05-28 17:27:31 | INFO  | Task ffc1a8d8-c459-47a4-8999-43321493f5ee is in state STARTED
2025-05-28 17:27:31.200923 | orchestrator | 2025-05-28 17:27:31 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:27:31.201229 | orchestrator | 2025-05-28 17:27:31 | INFO  | Task 7432d1e9-9d72-44ec-b255-622be1d0ea02 is in state STARTED
2025-05-28 17:27:31.201776 | orchestrator | 2025-05-28 17:27:31 | INFO  | Task 628ae488-32c1-4536-8d3a-3f4b270537be is in state STARTED
2025-05-28 17:27:31.201871 | orchestrator | 2025-05-28 17:27:31 | INFO  | Wait 1 second(s) until the next check
[the same four tasks were polled every ~3 seconds from 17:27:34 through 17:29:02, all in state STARTED; task 4dcc401f-b13d-4ebe-8122-b338e30a94ae additionally appears in state STARTED from 17:28:47 through 17:29:02]
2025-05-28 17:29:05.676710 | orchestrator | 2025-05-28 17:29:05 | INFO  | Task ffc1a8d8-c459-47a4-8999-43321493f5ee is in state STARTED
2025-05-28 17:29:05.678810 | orchestrator | 2025-05-28 17:29:05 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED
2025-05-28 17:29:05.681144 | orchestrator | 2025-05-28 17:29:05 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:29:05.683585 | orchestrator | 2025-05-28 17:29:05 | INFO  | Task 7432d1e9-9d72-44ec-b255-622be1d0ea02 is in state STARTED
2025-05-28 17:29:05.683611 | orchestrator | 2025-05-28 17:29:05 | INFO  | Task 628ae488-32c1-4536-8d3a-3f4b270537be is in state SUCCESS
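The STARTED/SUCCESS strings above match Celery's task state names; assuming the OSISM task queue is Celery-backed (broker and backend URLs below are placeholders), the single state lookup the earlier wait-loop sketch assumed could be:

    from celery import Celery
    from celery.result import AsyncResult

    # Hypothetical broker/backend; the real configuration lives with the
    # OSISM manager services.
    app = Celery(broker="redis://manager:6379/0",
                 backend="redis://manager:6379/0")

    def get_task_state(task_id: str) -> str:
        return AsyncResult(task_id, app=app).state  # e.g. STARTED or SUCCESS

    print(get_task_state("628ae488-32c1-4536-8d3a-3f4b270537be"))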

PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on Kolla action] ***************************************
Wednesday 28 May 2025  17:26:16 +0000 (0:00:00.437)       0:00:00.437 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Group hosts based on enabled services] ***********************************
Wednesday 28 May 2025  17:26:16 +0000 (0:00:00.429)       0:00:00.866 *********
ok: [testbed-node-0] => (item=enable_glance_True)
ok: [testbed-node-1] => (item=enable_glance_True)
ok: [testbed-node-2] => (item=enable_glance_True)

PLAY [Apply role glance] *******************************************************

TASK [glance : include_tasks] **************************************************
Wednesday 28 May 2025  17:26:16 +0000 (0:00:00.341)       0:00:01.208 *********
included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [service-ks-register : glance | Creating services] ************************
Wednesday 28 May 2025  17:26:17 +0000 (0:00:00.679)       0:00:01.887 *********
changed: [testbed-node-0] => (item=glance (image))

TASK [service-ks-register : glance | Creating endpoints] ***********************
Wednesday 28 May 2025  17:26:21 +0000 (0:00:03.910)       0:00:05.798 *********
changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)

TASK [service-ks-register : glance | Creating projects] ************************
Wednesday 28 May 2025  17:26:27 +0000 (0:00:05.817)       0:00:11.616 *********
changed: [testbed-node-0] => (item=service)

TASK [service-ks-register : glance | Creating users] ***************************
Wednesday 28 May 2025  17:26:30 +0000 (0:00:02.930)       0:00:14.546 *********
[WARNING]: Module did not set no_log for update_password
changed: [testbed-node-0] => (item=glance -> service)

TASK [service-ks-register : glance | Creating roles] ***************************
Wednesday 28 May 2025  17:26:33 +0000 (0:00:03.301)       0:00:17.848 *********
ok: [testbed-node-0] => (item=admin)

TASK [service-ks-register : glance | Granting user roles] **********************
Wednesday 28 May 2025  17:26:36 +0000 (0:00:03.285)       0:00:21.133 *********
changed: [testbed-node-0] => (item=glance -> service -> admin)

TASK [glance : Ensuring config directories exist] ******************************
Wednesday 28 May 2025  17:26:41 +0000 (0:00:04.743)       0:00:25.877 *********
changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
changed: [testbed-node-0] => (item=glance-api) [same service definition with the node-local address 192.168.16.10]
changed: [testbed-node-1] => (item=glance-api) [same service definition with the node-local address 192.168.16.11]

TASK [glance : include_tasks] **************************************************
Wednesday 28 May 2025  17:26:47 +0000 (0:00:05.948)       0:00:31.825 *********
included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [glance : Ensuring glance service ceph config subdir exists] **************
Wednesday 28 May 2025  17:26:48 +0000 (0:00:00.542)       0:00:32.368 *********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [glance : Copy over multiple ceph configs for Glance] *********************
Wednesday 28 May 2025  17:26:51 +0000 (0:00:03.259)       0:00:35.627 *********
changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})

TASK [glance : Copy over ceph Glance keyrings] *********************************
Wednesday 28 May 2025  17:26:53 +0000 (0:00:01.853)       0:00:37.481 *********
changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})

TASK [glance : Ensuring config directory has correct owner and permission] *****
Wednesday 28 May 2025  17:26:54 +0000 (0:00:01.237)       0:00:38.719 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [glance : Check if policies shall be overwritten] *************************
Wednesday 28 May 2025  17:26:55 +0000 (0:00:00.712)       0:00:39.431 *********
skipping: [testbed-node-0]

TASK [glance : Set glance policy file] *****************************************
Wednesday 28 May 2025  17:26:55 +0000 (0:00:00.118)       0:00:39.550 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [glance : include_tasks] **************************************************
Wednesday 28 May 2025  17:26:55 +0000 (0:00:00.235)       0:00:39.785 *********
included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
Wednesday 28 May 2025  17:26:55 +0000 (0:00:00.545)       0:00:40.331 *********
changed: [testbed-node-1] => (item=glance-api) [service definition as above, 192.168.16.11]
changed: [testbed-node-0] => (item=glance-api) [service definition as above, 192.168.16.10]
changed: [testbed-node-2] => (item=glance-api) [service definition as above, 192.168.16.12]

TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] ***
Wednesday 28 May 2025  17:27:01 +0000 (0:00:05.165)       0:00:45.497 *********
skipping: [testbed-node-1] => (item=glance-api) [service definition as above, 192.168.16.11]
skipping: [testbed-node-1]
skipping: [testbed-node-0] => (item=glance-api) [service definition as above, 192.168.16.10]
skipping: [testbed-node-0]
skipping: [testbed-node-2] => (item=glance-api) [service definition as above, 192.168.16.12]
skipping: [testbed-node-2]

TASK [service-cert-copy : glance | Copying over backend internal TLS key] ******
Wednesday 28 May 2025  17:27:05 +0000 (0:00:03.907)       0:00:49.404 *********
skipping: [testbed-node-2] => (item=glance-api) [service definition as above, 192.168.16.12]
skipping: [testbed-node-2]
skipping: [testbed-node-0] => (item=glance-api) [service definition as above, 192.168.16.10]
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={'key': 'glance-api',
'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-28 17:29:05.687181 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:29:05.687192 | orchestrator | 2025-05-28 17:29:05.687203 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-05-28 17:29:05.687213 | orchestrator | Wednesday 28 May 2025 17:27:08 +0000 (0:00:03.392) 0:00:52.797 ********* 2025-05-28 17:29:05.687224 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:29:05.687235 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:29:05.687245 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:29:05.687256 | orchestrator | 2025-05-28 17:29:05.687267 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-05-28 17:29:05.687277 | orchestrator | Wednesday 28 May 2025 17:27:11 +0000 (0:00:03.503) 0:00:56.300 ********* 2025-05-28 17:29:05.687294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 
5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-28 17:29:05.687313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-28 17:29:05.687331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 
5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-28 17:29:05.687352 | orchestrator |
2025-05-28 17:29:05.687363 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2025-05-28 17:29:05.687374 | orchestrator | Wednesday 28 May 2025 17:27:16 +0000 (0:00:04.394) 0:01:00.695 *********
2025-05-28 17:29:05.687384 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:29:05.687395 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:29:05.687406 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:29:05.687417 | orchestrator |
2025-05-28 17:29:05.687427 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2025-05-28 17:29:05.687438 | orchestrator | Wednesday 28 May 2025 17:27:23 +0000 (0:00:07.515) 0:01:08.210 *********
2025-05-28 17:29:05.687449 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:29:05.687460 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:29:05.687471 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:29:05.687481 | orchestrator |
2025-05-28 17:29:05.687492 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2025-05-28 17:29:05.687509 | orchestrator | Wednesday 28 May 2025 17:27:29 +0000 (0:00:06.128) 0:01:14.339 *********
2025-05-28 17:29:05.687520 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:29:05.687531 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:29:05.687542 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:29:05.687552 | orchestrator |
2025-05-28 17:29:05.687563 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2025-05-28 17:29:05.687574 | orchestrator | Wednesday 28 May 2025 17:27:34 +0000 (0:00:04.501) 0:01:18.840 *********
2025-05-28 17:29:05.687585 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:29:05.687596 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:29:05.687606 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:29:05.687617 | orchestrator |
2025-05-28 17:29:05.687628 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2025-05-28 17:29:05.687639 | orchestrator | Wednesday 28 May 2025 17:27:38 +0000 (0:00:03.845) 0:01:22.685 *********
2025-05-28 17:29:05.687650 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:29:05.687660 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:29:05.687671 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:29:05.687681 | orchestrator |
2025-05-28 17:29:05.687692 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2025-05-28 17:29:05.687703 | orchestrator | Wednesday 28 May 2025 17:27:41 +0000 (0:00:03.400) 0:01:26.086 *********
2025-05-28 17:29:05.687714 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:29:05.687725 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:29:05.687735 | orchestrator | skipping: [testbed-node-2]
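The backend TLS tasks earlier in this play ("Copying over backend internal TLS certificate", "Copying over backend internal TLS key", "Creating TLS backend PEM File", and the glance-haproxy-tls.cfg template just below) all report skipping because this testbed run has TLS between HAProxy and the glance-api backends switched off. In stock kolla-ansible that behaviour hangs off a few globals.yml switches; a minimal sketch using the standard variable names (the values here are an assumption for illustration, not taken from this job's configuration):

    # globals.yml -- hypothetical override, not part of this job
    kolla_enable_tls_internal: "yes"   # TLS on the internal VIP
    kolla_enable_tls_external: "yes"   # TLS on the external VIP
    kolla_enable_tls_backend: "yes"    # TLS from HAProxy to each glance-api backend

With kolla_enable_tls_backend enabled, the service-cert-copy tasks would place a per-service certificate and key on each node and the PEM task would concatenate them into the file the container mounts; with it disabled, as here, the whole chain short-circuits to skipping.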
2025-05-28 17:29:05.687746 | orchestrator |
2025-05-28 17:29:05.687757 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2025-05-28 17:29:05.687767 | orchestrator | Wednesday 28 May 2025 17:27:41 +0000 (0:00:00.246) 0:01:26.333 *********
2025-05-28 17:29:05.687778 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-05-28 17:29:05.687789 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:29:05.687800 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-05-28 17:29:05.687811 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:29:05.687822 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-05-28 17:29:05.687832 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:29:05.687843 | orchestrator |
2025-05-28 17:29:05.687854 | orchestrator | TASK [glance : Check glance containers] ****************************************
2025-05-28 17:29:05.687864 | orchestrator | Wednesday 28 May 2025 17:27:44 +0000 (0:00:02.834) 0:01:29.168 *********
2025-05-28 17:29:05.687881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-28 17:29:05.687909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '',
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-28 17:29:05.687927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-28 17:29:05.687947 | orchestrator | 2025-05-28 17:29:05.687958 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-28 17:29:05.687969 | orchestrator | Wednesday 28 May 2025 17:27:48 +0000 (0:00:03.281) 0:01:32.449 ********* 2025-05-28 17:29:05.687980 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:29:05.687990 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:29:05.688001 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:29:05.688012 | orchestrator | 2025-05-28 17:29:05.688023 | orchestrator | TASK [glance : Creating Glance database] 
***************************************
2025-05-28 17:29:05.688033 | orchestrator | Wednesday 28 May 2025 17:27:48 +0000 (0:00:00.200) 0:01:32.650 *********
2025-05-28 17:29:05.688044 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:29:05.688145 | orchestrator |
2025-05-28 17:29:05.688159 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2025-05-28 17:29:05.688170 | orchestrator | Wednesday 28 May 2025 17:27:50 +0000 (0:00:02.043) 0:01:34.693 *********
2025-05-28 17:29:05.688181 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:29:05.688192 | orchestrator |
2025-05-28 17:29:05.688203 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2025-05-28 17:29:05.688214 | orchestrator | Wednesday 28 May 2025 17:27:52 +0000 (0:00:02.058) 0:01:36.751 *********
2025-05-28 17:29:05.688225 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:29:05.688236 | orchestrator |
2025-05-28 17:29:05.688248 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2025-05-28 17:29:05.688257 | orchestrator | Wednesday 28 May 2025 17:27:54 +0000 (0:00:01.856) 0:01:38.608 *********
2025-05-28 17:29:05.688267 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:29:05.688277 | orchestrator |
2025-05-28 17:29:05.688286 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2025-05-28 17:29:05.688296 | orchestrator | Wednesday 28 May 2025 17:28:20 +0000 (0:00:25.746) 0:02:04.355 *********
2025-05-28 17:29:05.688306 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:29:05.688316 | orchestrator |
2025-05-28 17:29:05.688331 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-05-28 17:29:05.688342 | orchestrator | Wednesday 28 May 2025 17:28:22 +0000 (0:00:02.383) 0:02:06.739 *********
2025-05-28 17:29:05.688351 | orchestrator |
2025-05-28 17:29:05.688361 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-05-28 17:29:05.688371 | orchestrator | Wednesday 28 May 2025 17:28:22 +0000 (0:00:00.061) 0:02:06.800 *********
2025-05-28 17:29:05.688380 | orchestrator |
2025-05-28 17:29:05.688390 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-05-28 17:29:05.688400 | orchestrator | Wednesday 28 May 2025 17:28:22 +0000 (0:00:00.061) 0:02:06.861 *********
2025-05-28 17:29:05.688410 | orchestrator |
2025-05-28 17:29:05.688419 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-05-28 17:29:05.688429 | orchestrator | Wednesday 28 May 2025 17:28:22 +0000 (0:00:00.062) 0:02:06.924 *********
2025-05-28 17:29:05.688439 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:29:05.688448 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:29:05.688458 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:29:05.688475 | orchestrator |
2025-05-28 17:29:05.688485 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 17:29:05.688520 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-05-28 17:29:05.688532 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-05-28 17:29:05.688542 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-05-28 17:29:05.688552 | orchestrator |
2025-05-28 17:29:05.688561 | orchestrator |
2025-05-28 17:29:05.688571 | orchestrator | TASKS RECAP ********************************************************************
2025-05-28 17:29:05.688581 | orchestrator | Wednesday 28 May 2025 17:29:03 +0000 (0:00:41.287) 0:02:48.211 *********
2025-05-28 17:29:05.688591 | orchestrator | ===============================================================================
2025-05-28 17:29:05.688600 | orchestrator | glance : Restart glance-api container ---------------------------------- 41.29s
2025-05-28 17:29:05.688610 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 25.75s
2025-05-28 17:29:05.688625 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 7.52s
2025-05-28 17:29:05.688635 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 6.13s
2025-05-28 17:29:05.688644 | orchestrator | glance : Ensuring config directories exist ------------------------------ 5.95s
2025-05-28 17:29:05.688654 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 5.82s
2025-05-28 17:29:05.688664 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.17s
2025-05-28 17:29:05.688673 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.74s
2025-05-28 17:29:05.688683 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.50s
2025-05-28 17:29:05.688692 | orchestrator | glance : Copying over config.json files for services -------------------- 4.39s
2025-05-28 17:29:05.688702 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.91s
2025-05-28 17:29:05.688711 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.91s
2025-05-28 17:29:05.688721 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.85s
2025-05-28 17:29:05.688731 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.50s
2025-05-28 17:29:05.688740 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.40s
2025-05-28 17:29:05.688750 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.39s
2025-05-28 17:29:05.688759 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.30s
2025-05-28 17:29:05.688769 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.29s
2025-05-28 17:29:05.688778 | orchestrator | glance : Check glance containers ---------------------------------------- 3.28s
2025-05-28 17:29:05.688788 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.26s
2025-05-28 17:29:05.688798 | orchestrator | 2025-05-28 17:29:05 | INFO  | Task 4dcc401f-b13d-4ebe-8122-b338e30a94ae is in state SUCCESS
2025-05-28 17:29:05.688807 | orchestrator | 2025-05-28 17:29:05 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:29:08.738285 | orchestrator | 2025-05-28 17:29:08 | INFO  | Task ffc1a8d8-c459-47a4-8999-43321493f5ee is in state STARTED
2025-05-28 17:29:08.738545 | orchestrator | 2025-05-28 17:29:08 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED
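The tasks recap above shows where the time went: the glance-api restart (41.29 s) and the one-shot bootstrap container (25.75 s) dominate, while the rest is template copying and Keystone registration. The database steps follow a fixed kolla-ansible pattern: create the schema, create the user, temporarily enable log_bin_trust_function_creators (MariaDB refuses to create stored functions while binary logging is active unless this is set, and the Glance schema migrations create such functions), run the migration in a bootstrap container, then revert the variable. A minimal Ansible sketch of the equivalent steps, assuming the community.mysql collection and a hypothetical glance_database_password variable (the real play drives this through kolla-ansible's own kolla_toolbox module):

    - name: Create the glance database
      community.mysql.mysql_db:        # login credentials omitted for brevity
        name: glance
        state: present

    - name: Create the glance user and grant it the schema
      community.mysql.mysql_user:
        name: glance
        host: "%"
        password: "{{ glance_database_password }}"   # hypothetical variable
        priv: "glance.*:ALL"
        state: present

    - name: Allow stored-function creation while the binlog is active
      community.mysql.mysql_variables:
        variable: log_bin_trust_function_creators
        value: 1

    # ... run the one-shot bootstrap container (glance-manage db sync) here ...

    - name: Revert the variable once the migration is done
      community.mysql.mysql_variables:
        variable: log_bin_trust_function_creators
        value: 0

Only testbed-node-0 reports changed for these steps because the database work happens once per cluster rather than once per host; the flushed handler then restarts glance_api on all three nodes.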
2025-05-28 17:29:08.738585 | orchestrator | 2025-05-28 17:29:08 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:29:08.739413 | orchestrator | 2025-05-28 17:29:08 | INFO  | Task 7432d1e9-9d72-44ec-b255-622be1d0ea02 is in state STARTED
2025-05-28 17:29:08.739490 | orchestrator | 2025-05-28 17:29:08 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:29:11.776734 | orchestrator | 2025-05-28 17:29:11 | INFO  | Task ffc1a8d8-c459-47a4-8999-43321493f5ee is in state STARTED
2025-05-28 17:29:11.777507 | orchestrator | 2025-05-28 17:29:11 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED
2025-05-28 17:29:11.780304 | orchestrator | 2025-05-28 17:29:11 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:29:11.782196 | orchestrator | 2025-05-28 17:29:11 | INFO  | Task 7432d1e9-9d72-44ec-b255-622be1d0ea02 is in state STARTED
2025-05-28 17:29:11.782250 | orchestrator | 2025-05-28 17:29:11 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:29:14.828795 | orchestrator | 2025-05-28 17:29:14 | INFO  | Task ffc1a8d8-c459-47a4-8999-43321493f5ee is in state STARTED
2025-05-28 17:29:14.829985 | orchestrator | 2025-05-28 17:29:14 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED
2025-05-28 17:29:14.833181 | orchestrator | 2025-05-28 17:29:14 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:29:14.835956 | orchestrator | 2025-05-28 17:29:14 | INFO  | Task 7432d1e9-9d72-44ec-b255-622be1d0ea02 is in state STARTED
2025-05-28 17:29:14.835973 | orchestrator | 2025-05-28 17:29:14 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:29:17.886200 | orchestrator | 2025-05-28 17:29:17 | INFO  | Task ffc1a8d8-c459-47a4-8999-43321493f5ee is in state STARTED
2025-05-28 17:29:17.887608 | orchestrator | 2025-05-28 17:29:17 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED
2025-05-28 17:29:17.890353 | orchestrator | 2025-05-28 17:29:17 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:29:17.891883 | orchestrator | 2025-05-28 17:29:17 | INFO  | Task 7432d1e9-9d72-44ec-b255-622be1d0ea02 is in state STARTED
2025-05-28 17:29:17.891905 | orchestrator | 2025-05-28 17:29:17 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:29:20.937592 | orchestrator | 2025-05-28 17:29:20 | INFO  | Task ffc1a8d8-c459-47a4-8999-43321493f5ee is in state STARTED
2025-05-28 17:29:20.939591 | orchestrator | 2025-05-28 17:29:20 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED
2025-05-28 17:29:20.940687 | orchestrator | 2025-05-28 17:29:20 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:29:20.944428 | orchestrator | 2025-05-28 17:29:20 | INFO  | Task 7432d1e9-9d72-44ec-b255-622be1d0ea02 is in state STARTED
2025-05-28 17:29:20.944466 | orchestrator | 2025-05-28 17:29:20 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:29:23.994465 | orchestrator | 2025-05-28 17:29:23 | INFO  | Task ffc1a8d8-c459-47a4-8999-43321493f5ee is in state STARTED
2025-05-28 17:29:23.994791 | orchestrator | 2025-05-28 17:29:23 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED
2025-05-28 17:29:23.995802 | orchestrator | 2025-05-28 17:29:23 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:29:24.000720 | orchestrator | 2025-05-28 17:29:23 | INFO  | Task 7432d1e9-9d72-44ec-b255-622be1d0ea02 is in state STARTED
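Four task IDs are polled here, evidently one per playbook the OSISM manager has dispatched in parallel (the SUCCESS for 4dcc401f above was the glance run that just finished). While those run, the glance loop items printed earlier are worth unpacking. Every glance-api item carries the same haproxy stanza; flattened from the logged Python dict into YAML it reads:

    glance_api:                  # listener on the internal VIP
      enabled: true
      mode: http
      external: false
      port: "9292"
      frontend_http_extra:
        - "timeout client 6h"    # image uploads and downloads can run for hours
      backend_http_extra:
        - "timeout server 6h"
      custom_member_list:
        - server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5
        - server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5
        - server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5
    glance_api_external:         # same backends, published as api.testbed.osism.xyz
      enabled: true
      mode: http
      external: true
      external_fqdn: api.testbed.osism.xyz
      port: "9292"

Each custom_member_list entry is a literal HAProxy server line: health-check every 2000 ms, two consecutive passes to come up, five failures to go down. Independent of that, the container-level healthcheck dict in the same items becomes the Docker HEALTHCHECK: healthcheck_curl against the node's own API on port 9292, every 30 s, with three retries.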
2025-05-28 17:29:24.000761 | orchestrator | 2025-05-28 17:29:23 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:29:27.050439 | orchestrator | 2025-05-28 17:29:27 | INFO  | Task ffc1a8d8-c459-47a4-8999-43321493f5ee is in state STARTED
2025-05-28 17:29:27.051662 | orchestrator | 2025-05-28 17:29:27 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED
2025-05-28 17:29:27.053724 | orchestrator | 2025-05-28 17:29:27 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:29:27.054897 | orchestrator | 2025-05-28 17:29:27 | INFO  | Task 7432d1e9-9d72-44ec-b255-622be1d0ea02 is in state STARTED
2025-05-28 17:29:27.055119 | orchestrator | 2025-05-28 17:29:27 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:29:30.111816 | orchestrator | 2025-05-28 17:29:30 | INFO  | Task ffc1a8d8-c459-47a4-8999-43321493f5ee is in state STARTED
2025-05-28 17:29:30.111934 | orchestrator | 2025-05-28 17:29:30 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED
2025-05-28 17:29:30.115927 | orchestrator | 2025-05-28 17:29:30 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:29:30.118789 | orchestrator | 2025-05-28 17:29:30 | INFO  | Task 7432d1e9-9d72-44ec-b255-622be1d0ea02 is in state STARTED
2025-05-28 17:29:30.119063 | orchestrator | 2025-05-28 17:29:30 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:29:33.166600 | orchestrator | 2025-05-28 17:29:33 | INFO  | Task ffc1a8d8-c459-47a4-8999-43321493f5ee is in state STARTED
2025-05-28 17:29:33.167003 | orchestrator | 2025-05-28 17:29:33 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED
2025-05-28 17:29:33.168982 | orchestrator | 2025-05-28 17:29:33 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:29:33.170671 | orchestrator | 2025-05-28 17:29:33 | INFO  | Task 7432d1e9-9d72-44ec-b255-622be1d0ea02 is in state STARTED
2025-05-28 17:29:33.170696 | orchestrator | 2025-05-28 17:29:33 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:29:36.222735 | orchestrator | 2025-05-28 17:29:36 | INFO  | Task ffc1a8d8-c459-47a4-8999-43321493f5ee is in state STARTED
2025-05-28 17:29:36.225787 | orchestrator | 2025-05-28 17:29:36 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED
2025-05-28 17:29:36.226288 | orchestrator | 2025-05-28 17:29:36 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:29:36.232219 | orchestrator | 2025-05-28 17:29:36 | INFO  | Task 7432d1e9-9d72-44ec-b255-622be1d0ea02 is in state SUCCESS
2025-05-28 17:29:36.232266 | orchestrator |
2025-05-28 17:29:36.232281 | orchestrator | None
2025-05-28 17:29:36.233790 | orchestrator |
2025-05-28 17:29:36.233822 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-28 17:29:36.233894 | orchestrator |
2025-05-28 17:29:36.233906 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-28 17:29:36.233918 | orchestrator | Wednesday 28 May 2025 17:26:08 +0000 (0:00:00.267) 0:00:00.267 *********
2025-05-28 17:29:36.233929 | orchestrator | ok: [testbed-manager]
2025-05-28 17:29:36.233941 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:29:36.233952 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:29:36.234014 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:29:36.234248 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:29:36.234262 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:29:36.234272 | orchestrator | ok:
[testbed-node-5] 2025-05-28 17:29:36.234283 | orchestrator | 2025-05-28 17:29:36.234294 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-28 17:29:36.234305 | orchestrator | Wednesday 28 May 2025 17:26:09 +0000 (0:00:00.860) 0:00:01.128 ********* 2025-05-28 17:29:36.234317 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-05-28 17:29:36.234328 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-05-28 17:29:36.234338 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-05-28 17:29:36.234349 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-05-28 17:29:36.234374 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-05-28 17:29:36.234386 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-05-28 17:29:36.234417 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-05-28 17:29:36.234428 | orchestrator | 2025-05-28 17:29:36.234439 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-05-28 17:29:36.234450 | orchestrator | 2025-05-28 17:29:36.234460 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-05-28 17:29:36.234471 | orchestrator | Wednesday 28 May 2025 17:26:10 +0000 (0:00:00.732) 0:00:01.860 ********* 2025-05-28 17:29:36.234482 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:29:36.234494 | orchestrator | 2025-05-28 17:29:36.234506 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-05-28 17:29:36.234516 | orchestrator | Wednesday 28 May 2025 17:26:12 +0000 (0:00:01.838) 0:00:03.698 ********* 2025-05-28 17:29:36.234530 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-28 17:29:36.234545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 17:29:36.234558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 17:29:36.234569 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 17:29:36.234594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 17:29:36.234606 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 17:29:36.234631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:29:36.234661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:29:36.234675 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 
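The "Ensuring config directories exist" loop keeps printing items for the remaining hosts below; before it does, the two grouping tasks at the start of this play deserve a note. They are kolla-ansible's placement mechanism: every host evaluates its enable_* flags and joins a dynamically created group such as enable_prometheus_True, and the service plays then target those groups. Roughly, as an Ansible sketch (the exact loop in kolla-ansible differs):

    - name: Group hosts based on enabled services
      ansible.builtin.group_by:
        key: "enable_prometheus_{{ enable_prometheus | bool }}"

The loop items also show which exporter lands where in this testbed: node-exporter and cAdvisor go to every host, the mysqld, memcached and elasticsearch exporters only to the control nodes (testbed-node-0 to -2), the libvirt exporter only to the compute nodes (testbed-node-3 to -5), and the Prometheus server, Alertmanager and blackbox exporter to testbed-manager. Each item is a complete container definition; the node-exporter one, flattened from the logged dict into YAML:

    prometheus-node-exporter:
      container_name: prometheus_node_exporter
      group: prometheus-node-exporter
      enabled: true
      image: registry.osism.tech/kolla/prometheus-node-exporter:2024.2
      pid_mode: host             # share the host PID namespace
      volumes:
        - /etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro
        - /etc/localtime:/etc/localtime:ro
        - /etc/timezone:/etc/timezone:ro
        - kolla_logs:/var/log/kolla/
        - /:/host:ro,rslave      # host filesystem, read-only, for disk and fs metrics
      dimensions: {}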
2025-05-28 17:29:36.234687 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 17:29:36.234699 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 17:29:36.234710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:29:36.234727 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 17:29:36.234746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:29:36.234764 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-28 17:29:36.234779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:29:36.234790 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 17:29:36.234801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:29:36.234813 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-28 17:29:36.234831 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:29:36.234849 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 17:29:36.234866 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 17:29:36.234878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 17:29:36.234889 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-28 17:29:36.234900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 17:29:36.234911 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-28 17:29:36.234922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:29:36.234959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:29:36.234977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:29:36.235088 | orchestrator | 2025-05-28 17:29:36.235149 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-05-28 17:29:36.235161 | orchestrator | Wednesday 28 May 2025 17:26:15 +0000 (0:00:03.560) 0:00:07.259 ********* 2025-05-28 17:29:36.235173 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:29:36.235184 | orchestrator | 2025-05-28 17:29:36.235195 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-05-28 17:29:36.235206 | orchestrator | Wednesday 28 May 2025 17:26:17 +0000 (0:00:01.810) 0:00:09.069 ********* 2025-05-28 17:29:36.235217 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-28 17:29:36.235229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 17:29:36.235240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 17:29:36.235251 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 17:29:36.235277 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 17:29:36.235289 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 17:29:36.235305 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 17:29:36.235316 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 17:29:36.235328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:29:36.235339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:29:36.235350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:29:36.235362 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 17:29:36.235384 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 17:29:36.235396 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 17:29:36.235412 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 17:29:36.235424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:29:36.235435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:29:36.235446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:29:36.235456 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-28 17:29:36.235524 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-28 17:29:36.235539 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-28 17:29:36.235556 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-28 17:29:36.235567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 17:29:36.235578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 17:29:36.235589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 17:29:36.235600 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:29:36.235618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:29:36.237852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:29:36.237887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:29:36.237898 | orchestrator | 2025-05-28 17:29:36.237910 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-05-28 17:29:36.237921 | orchestrator | Wednesday 28 May 2025 17:26:23 +0000 (0:00:05.967) 0:00:15.036 ********* 2025-05-28 17:29:36.237944 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-28 17:29:36.237957 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-28 17:29:36.237968 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 17:29:36.237993 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-28 17:29:36.238067 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:29:36.238083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-28 17:29:36.238124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:29:36.238137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:29:36.238149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 17:29:36.238160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:29:36.238184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-28 17:29:36.238196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:29:36.238222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:29:36.238234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 17:29:36.238250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:29:36.238262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-28 17:29:36.238273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:29:36.238284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:29:36.238302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 17:29:36.238313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:29:36.238324 | orchestrator | skipping: [testbed-manager] 2025-05-28 17:29:36.238336 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:29:36.238347 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:29:36.238358 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:29:36.238375 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-28 17:29:36.238387 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 17:29:36.238403 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-28 17:29:36.238414 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:29:36.238426 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-28 17:29:36.238437 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 17:29:36.238457 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-28 17:29:36.238470 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:29:36.238482 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-28 17:29:36.238495 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 17:29:36.238538 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-28 17:29:36.238552 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:29:36.238564 | orchestrator | 2025-05-28 17:29:36.238575 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-05-28 17:29:36.238588 | orchestrator | Wednesday 28 May 2025 17:26:25 +0000 (0:00:01.730) 0:00:16.767 ********* 2025-05-28 17:29:36.238606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-28 17:29:36.238619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:29:36.238639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:29:36.238653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 17:29:36.238665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:29:36.238678 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-28 17:29:36.238698 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-28 17:29:36.238716 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 17:29:36.238729 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-28 17:29:36.238752 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:29:36.238764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-28 17:29:36.238777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:29:36.238790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:29:36.238808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 17:29:36.238819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:29:36.238839 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:29:36.238851 | orchestrator | skipping: [testbed-manager] 2025-05-28 17:29:36.238862 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:29:36.238873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-28 17:29:36.238890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:29:36.238901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:29:36.238912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 17:29:36.238923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 17:29:36.238940 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-28 17:29:36.238951 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:29:36.238963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 17:29:36.238978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-28 17:29:36.238995 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:29:36.239007 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-28 17:29:36.239018 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 17:29:36.239029 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-28 17:29:36.239067 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:29:36.239079 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-28 17:29:36.239090 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 17:29:36.239108 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-28 17:29:36.239119 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:29:36.239130 | orchestrator | 2025-05-28 17:29:36.239141 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-05-28 17:29:36.239152 | orchestrator | Wednesday 28 May 2025 17:26:27 +0000 (0:00:01.899) 0:00:18.667 ********* 2025-05-28 17:29:36.239168 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-28 17:29:36.239187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 17:29:36.239198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 17:29:36.239209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 17:29:36.239220 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 17:29:36.239232 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 17:29:36.239248 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 17:29:36.239268 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 17:29:36.239303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:29:36.239323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:29:36.239341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:29:36.239360 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 17:29:36.239380 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 17:29:36.239401 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 17:29:36.239432 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 17:29:36.239461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:29:36.239478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:29:36.239490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 17:29:36.239502 | orchestrator | changed: [testbed-node-3] => (item={'key': 
2025-05-28 17:29:36.239502 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-28 17:29:36.239513 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-28 17:29:36.239525 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', ...})
2025-05-28 17:29:36.239542 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', ...})
2025-05-28 17:29:36.239560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', ...})
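The prometheus-alertmanager item above differs from the plain exporters: it carries an 'haproxy' sub-dict, which kolla-ansible renders into load-balancer frontends on the internal and external VIPs. A restatement of that logged structure with explanatory comments (the auth_pass values are the generated secrets visible in the item above, elided here):

haproxy:
  prometheus_alertmanager:            # endpoint on the internal VIP
    enabled: true
    mode: http
    external: false
    port: "9093"
    auth_user: admin                  # HTTP basic auth in front of Alertmanager
    auth_pass: "..."                  # generated secret, see the logged item
    active_passive: true              # route all traffic to a single backend at a time
  prometheus_alertmanager_external:   # endpoint on the external FQDN
    enabled: true
    mode: http
    external: true
    external_fqdn: api.testbed.osism.xyz
    port: "9093"
    listen_port: "9093"
    auth_user: admin
    auth_pass: "..."
    active_passive: true

'active_passive' matters here because Alertmanager instances each keep their own silence/notification state; pinning traffic to one backend avoids inconsistent views across replicas.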
2025-05-28 17:29:36.239576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', ...})
2025-05-28 17:29:36.239588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', ...})
2025-05-28 17:29:36.239599 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 17:29:36.239610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 17:29:36.239621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', ...})
2025-05-28 17:29:36.239632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', ...})
2025-05-28 17:29:36.239650 | orchestrator |
2025-05-28 17:29:36.239661 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-05-28 17:29:36.239672 | orchestrator | Wednesday 28 May 2025 17:26:32 +0000 (0:00:05.596) 0:00:24.264 *********
2025-05-28 17:29:36.239683 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-28 17:29:36.239694 | orchestrator |
2025-05-28 17:29:36.239705 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-05-28 17:29:36.239721 | orchestrator | Wednesday 28 May 2025 17:26:33 +0000 (0:00:01.205) 0:00:25.470 *********
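The two tasks above follow the usual kolla-ansible custom-config pattern: a find on the deployment host enumerates the operator-supplied rule files (hence "testbed-manager -> localhost"), and a copy loop then distributes each hit. A minimal reconstruction of that pattern under assumed variable names and destination path, not the literal role source:

- name: Find custom prometheus alert rules files
  find:
    paths: "{{ node_custom_config }}/prometheus/"   # resolves to /operations/prometheus here (assumed)
    patterns: "*.rules"
  delegate_to: localhost
  register: prometheus_alert_rules

- name: Copying over custom prometheus alert rules files
  copy:
    src: "{{ item.path }}"
    dest: "/etc/kolla/prometheus-server/{{ item.path | basename }}"  # assumed destination
  loop: "{{ prometheus_alert_rules.files }}"
  when: inventory_hostname in groups['prometheus']   # explains the per-host skips below

Because find returns full stat results, each loop item in the log carries the file's mode, size, inode, and timestamps alongside its path.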
2025-05-28 17:29:36.239733 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1079629, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450421.0154293, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-28 17:29:36.239749 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', ...})
2025-05-28 17:29:36.239761 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', ...})
2025-05-28 17:29:36.239772 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', ...})
2025-05-28 17:29:36.239783 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', ...})
2025-05-28 17:29:36.239794 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', ...})
2025-05-28 17:29:36.239816 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', ...})
2025-05-28 17:29:36.239828 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', ...})
2025-05-28 17:29:36.239848 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', ...})
2025-05-28 17:29:36.239859 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', ...})
2025-05-28 17:29:36.239870 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', ...})
2025-05-28 17:29:36.239881 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', ...})
2025-05-28 17:29:36.239892 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', ...})
2025-05-28 17:29:36.239915 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', ...})
2025-05-28 17:29:36.239926 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', ...})
2025-05-28 17:29:36.239941 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', ...})
2025-05-28 17:29:36.239953 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', ...})
2025-05-28 17:29:36.239964 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', ...})
2025-05-28 17:29:36.239975 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', ...})
2025-05-28 17:29:36.239986 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', ...})
2025-05-28 17:29:36.240003 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', ...})
2025-05-28 17:29:36.240020 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', ...})
2025-05-28 17:29:36.240036 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', ...})
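As the statuses show, only testbed-manager (where the Prometheus server runs in this testbed) actually copies the files; the compute/control nodes skip every item. Note that several *.rec.rules items report 'size': 3, i.e. they are essentially empty placeholders. On the server the copied files take effect through Prometheus's rule_files globs; a generic configuration fragment, with the path assumed for illustration:

rule_files:
  - "/etc/prometheus/*.rules"   # picks up alertmanager.rules, ceph.rules, node.rules,
                                # the *.rec.rules recording-rule files, and the rest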
2025-05-28 17:29:36.240104 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', ...})
2025-05-28 17:29:36.240117 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', ...})
2025-05-28 17:29:36.240128 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', ...})
2025-05-28 17:29:36.240146 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', ...})
2025-05-28 17:29:36.240157 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', ...})
2025-05-28 17:29:36.240175 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', ...})
2025-05-28 17:29:36.240192 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', ...})
2025-05-28 17:29:36.240204 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', ...})
2025-05-28 17:29:36.240215 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', ...})
2025-05-28 17:29:36.240226 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', ...})
2025-05-28 17:29:36.240243 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', ...})
2025-05-28 17:29:36.240255 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', ...})
2025-05-28 17:29:36.240272 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', ...})
2025-05-28 17:29:36.240284 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', ...})
2025-05-28 17:29:36.240295 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', ...})
2025-05-28 17:29:36.240306 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', ...})
2025-05-28 17:29:36.240317 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', ...})
2025-05-28 17:29:36.240365 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', ...})
2025-05-28 17:29:36.240377 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', ...})
2025-05-28 17:29:36.240510 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', ...})
2025-05-28 17:29:36.240539 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', ...})
2025-05-28 17:29:36.240558 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', ...})
2025-05-28 17:29:36.240576 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', ...})
2025-05-28 17:29:36.240592 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', ...})
2025-05-28 17:29:36.240619 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', ...})
2025-05-28 17:29:36.240636 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', ...})
2025-05-28 17:29:36.240682 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', ...})
2025-05-28 17:29:36.240699 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', ...})
2025-05-28 17:29:36.240709 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', ...})
2025-05-28 17:29:36.240719 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', ...})
2025-05-28 17:29:36.240735 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', ...})
2025-05-28 17:29:36.240745 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', ...})
2025-05-28 17:29:36.240755 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', ...})
2025-05-28 17:29:36.240793 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', ...})
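The files being distributed here are standard Prometheus rule files. A minimal, hypothetical example of the alerting-rule format they use (not taken from this testbed's rules):

groups:
  - name: node
    rules:
      - alert: NodeDown                     # fires when a node-exporter target disappears
        expr: up{job="node"} == 0
        for: 5m                             # must be down 5 minutes before firing
        labels:
          severity: critical
        annotations:
          summary: "Node exporter target {{ $labels.instance }} is down"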
2025-05-28 17:29:36.240809 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', ...})
2025-05-28 17:29:36.240819 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', ...})
2025-05-28 17:29:36.240829 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', ...})
2025-05-28 17:29:36.240848 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', ...})
2025-05-28 17:29:36.240858 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', ...})
2025-05-28 17:29:36.240868 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', ...})
2025-05-28 17:29:36.240905 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', ...})
2025-05-28 17:29:36.240920 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', ...})
2025-05-28 17:29:36.240931 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', ...})
2025-05-28 17:29:36.240940 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', ...})
2025-05-28 17:29:36.240957 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', ...})
2025-05-28 17:29:36.240967 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', ...})
2025-05-28 17:29:36.240977 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', ...})
2025-05-28 17:29:36.241017 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', ...})
2025-05-28 17:29:36.241028 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', ...})
2025-05-28 17:29:36.241077 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', ...})
2025-05-28 17:29:36.241096 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', ...})
2025-05-28 17:29:36.241106 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', ...})
2025-05-28 17:29:36.241116 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', ...})
2025-05-28 17:29:36.241126 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', ...})
2025-05-28 17:29:36.241141 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', ...})
2025-05-28 17:29:36.241150 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', ...})
2025-05-28 17:29:36.241164 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', ...})
2025-05-28 17:29:36.241180 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', ...})
2025-05-28 17:29:36.241190 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', ...})
2025-05-28 17:29:36.241200 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', ...})
2025-05-28 17:29:36.241210 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', ...})
2025-05-28 17:29:36.241226 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', ...})
2025-05-28 17:29:36.241236 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', ...})
2025-05-28 17:29:36.241250 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', ...})
2025-05-28 17:29:36.241266 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', ...})
2025-05-28 17:29:36.241276 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', ...})
2025-05-28 17:29:36.241286 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', ...})
2025-05-28 17:29:36.241296 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', ...})
2025-05-28 17:29:36.241310 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', ...})
2025-05-28 17:29:36.241321 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', ...})
2025-05-28 17:29:36.241330 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:29:36.241351 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', ...})
2025-05-28 17:29:36.241361 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', ...})
2025-05-28 17:29:36.241371 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', ...})
2025-05-28 17:29:36.241381 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', ...})
2025-05-28 17:29:36.241391 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', ...})
2025-05-28 17:29:36.241405 | orchestrator | skipping: [testbed-node-1] =>
(item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1079610, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450421.0104291, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 17:29:36.241416 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1079610, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450421.0104291, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 17:29:36.241436 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1079618, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450421.0134292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 17:29:36.241446 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1079604, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450421.0084293, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 17:29:36.241456 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1079630, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450421.0164292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 17:29:36.241466 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1079613, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450421.0114293, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 17:29:36.241476 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:29:36.241486 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1079638, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450421.0174294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 17:29:36.241500 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1079618, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450421.0134292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 17:29:36.241510 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1079638, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450421.0174294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 17:29:36.241530 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1079630, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450421.0164292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 17:29:36.241540 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:29:36.241550 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1079610, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450421.0104291, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 17:29:36.241561 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 
0, 'size': 5987, 'inode': 1079610, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450421.0104291, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 17:29:36.241571 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1079638, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450421.0174294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 17:29:36.241580 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1079620, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450421.0144293, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 17:29:36.241595 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1079630, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450421.0164292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 17:29:36.241613 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:29:36.241623 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1079630, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450421.0164292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 17:29:36.241637 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:29:36.241647 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1079610, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450421.0104291, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2025-05-28 17:29:36.241657 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1079630, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450421.0164292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 17:29:36.241666 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:29:36.241676 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1079625, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450421.0154293, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 17:29:36.241686 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1079640, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450421.0174294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 17:29:36.241696 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1079621, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450421.0144293, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 17:29:36.241711 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1079608, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450421.0094292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 17:29:36.241731 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1079611, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 
1748450421.0104291, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 17:29:36.241741 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1079604, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450421.0084293, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 17:29:36.241751 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1079618, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450421.0134292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 17:29:36.241761 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1079638, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450421.0174294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 17:29:36.241771 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1079610, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450421.0104291, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 17:29:36.241780 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1079630, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450421.0164292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 17:29:36.241796 | orchestrator | 2025-05-28 17:29:36.241805 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-05-28 17:29:36.241815 | orchestrator | Wednesday 28 May 2025 17:26:57 +0000 
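The loop above distributes the Prometheus alerting and recording rule files from the configuration overlay (/operations/prometheus/*.rules) to the host that runs prometheus-server, which is why only testbed-manager reports changed while the six testbed-node hosts skip every item; each loop item also carried the stat metadata of the source file (mode 0644, owned by root:root). A minimal sketch of this copy pattern, assuming a prior find task registered as prometheus_rule_files and a prometheus inventory group; the actual kolla-ansible task differs in detail:

- name: Copying over prometheus alert rules   # sketch only, not the real kolla-ansible task
  ansible.builtin.copy:
    src: "{{ item.path }}"
    dest: "/etc/kolla/prometheus-server/{{ item.path | basename }}"
    mode: "0644"
  loop: "{{ prometheus_rule_files.files }}"           # assumed register from an earlier find
  when: inventory_hostname in groups['prometheus']    # assumed group name; only the manager matches here
  notify: Restart prometheus-server container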
2025-05-28 17:29:36.241805 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-05-28 17:29:36.241815 | orchestrator | Wednesday 28 May 2025 17:26:57 +0000 (0:00:23.444) 0:00:48.915 *********
2025-05-28 17:29:36.241829 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-28 17:29:36.241839 | orchestrator |
2025-05-28 17:29:36.241848 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-05-28 17:29:36.241858 | orchestrator | Wednesday 28 May 2025 17:26:58 +0000 (0:00:01.431) 0:00:50.346 *********
2025-05-28 17:29:36.241868 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2025-05-28 17:29:36.241916 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-28 17:29:36.241925 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2025-05-28 17:29:36.241973 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-28 17:29:36.241982 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2025-05-28 17:29:36.242088 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2025-05-28 17:29:36.242135 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2025-05-28 17:29:36.242183 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2025-05-28 17:29:36.242231 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2025-05-28 17:29:36.242279 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-05-28 17:29:36.242295 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-05-28 17:29:36.242305 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-28 17:29:36.242314 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-05-28 17:29:36.242324 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-05-28 17:29:36.242333 | orchestrator |
2025-05-28 17:29:36.242343 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-05-28 17:29:36.242352 | orchestrator | Wednesday 28 May 2025 17:27:00 +0000 (0:00:02.163) 0:00:52.509 *********
2025-05-28 17:29:36.242362 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-28 17:29:36.242372 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:29:36.242382 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-28 17:29:36.242392 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:29:36.242401 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-28 17:29:36.242411 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:29:36.242420 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-28 17:29:36.242430 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:29:36.242440 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-28 17:29:36.242449 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:29:36.242459 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-28 17:29:36.242469 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:29:36.242478 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
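The [WARNING]: Skipped messages above are emitted by the find lookups: the role probes an optional per-host override directory for every inventory host, and when that directory does not exist the find module logs an access warning and returns an empty file list, so the task still finishes ok. A minimal sketch of the probe, assuming the variable name node_custom_config; the real kolla-ansible task may differ:

- name: Find prometheus host config overrides   # sketch only
  ansible.builtin.find:
    paths: "{{ node_custom_config }}/prometheus/{{ inventory_hostname }}/prometheus.yml.d"
    patterns: "*.yml"
  delegate_to: localhost
  register: prometheus_host_overrides
  # A missing prometheus.yml.d directory is expected and only produces the
  # "Skipped ... path due to this access issue" warning seen above.

Because the overlays are optional, these warnings are informational and do not indicate a deployment failure.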
2025-05-28 17:29:36.242488 | orchestrator |
2025-05-28 17:29:36.242497 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-05-28 17:29:36.242507 | orchestrator | Wednesday 28 May 2025 17:27:18 +0000 (0:00:17.131) 0:01:09.641 *********
2025-05-28 17:29:36.242522 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-28 17:29:36.242532 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:29:36.242542 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-28 17:29:36.242552 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:29:36.242561 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-28 17:29:36.242571 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:29:36.242580 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-28 17:29:36.242590 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:29:36.242599 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-28 17:29:36.242609 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:29:36.242618 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-28 17:29:36.242628 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:29:36.242637 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-28 17:29:36.242647 | orchestrator |
2025-05-28 17:29:36.242660 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-05-28 17:29:36.242670 | orchestrator | Wednesday 28 May 2025 17:27:21 +0000 (0:00:03.858) 0:01:13.500 *********
2025-05-28 17:29:36.242680 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-28 17:29:36.242691 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-28 17:29:36.242701 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-28 17:29:36.242716 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:29:36.242726 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:29:36.242735 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:29:36.242745 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-28 17:29:36.242755 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-28 17:29:36.242764 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:29:36.242774 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-28 17:29:36.242784 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:29:36.242794 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-28 17:29:36.242803 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:29:36.242813 | orchestrator |
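Only testbed-manager receives the Alertmanager configuration, since it is the single host running prometheus-alertmanager in this testbed. The copied overlay file follows the standard Alertmanager configuration format; a purely illustrative fragment (the actual overlay content is not shown in this log) would look like:

# Illustrative alertmanager.yml fragment only; not the testbed's real content.
route:
  receiver: default
  group_by: ['alertname']
receivers:
  - name: default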
2025-05-28 17:29:36.242823 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-05-28 17:29:36.242832 | orchestrator | Wednesday 28 May 2025 17:27:24 +0000 (0:00:02.827) 0:01:16.327 *********
2025-05-28 17:29:36.242842 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-28 17:29:36.242851 | orchestrator |
2025-05-28 17:29:36.242861 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-05-28 17:29:36.242870 | orchestrator | Wednesday 28 May 2025 17:27:26 +0000 (0:00:01.366) 0:01:17.693 *********
2025-05-28 17:29:36.242880 | orchestrator | skipping: [testbed-manager]
2025-05-28 17:29:36.242890 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:29:36.242899 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:29:36.242908 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:29:36.242918 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:29:36.242927 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:29:36.242937 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:29:36.242946 | orchestrator |
2025-05-28 17:29:36.242956 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2025-05-28 17:29:36.242965 | orchestrator | Wednesday 28 May 2025 17:27:27 +0000 (0:00:00.907) 0:01:18.601 *********
2025-05-28 17:29:36.242975 | orchestrator | skipping: [testbed-manager]
2025-05-28 17:29:36.242984 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:29:36.242994 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:29:36.243003 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:29:36.243013 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:29:36.243022 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:29:36.243031 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:29:36.243082 | orchestrator |
2025-05-28 17:29:36.243093 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2025-05-28 17:29:36.243103 | orchestrator | Wednesday 28 May 2025 17:27:30 +0000 (0:00:03.512) 0:01:22.113 *********
2025-05-28 17:29:36.243112 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-28 17:29:36.243122 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:29:36.243131 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-28 17:29:36.243141 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-28 17:29:36.243151 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-28 17:29:36.243160 | orchestrator | skipping: [testbed-manager]
2025-05-28 17:29:36.243170 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:29:36.243179 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:29:36.243194 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-28 17:29:36.243207 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:29:36.243216 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-28 17:29:36.243223 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:29:36.243231 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-28 17:29:36.243239 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:29:36.243247 | orchestrator |
2025-05-28 17:29:36.243255 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2025-05-28 17:29:36.243262 | orchestrator | Wednesday 28 May 2025 17:27:33 +0000 (0:00:02.500) 0:01:24.613 *********
2025-05-28 17:29:36.243270 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-28 17:29:36.243278 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:29:36.243286 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-28 17:29:36.243294 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:29:36.243306 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-28 17:29:36.243314 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-28 17:29:36.243322 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:29:36.243330 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-28 17:29:36.243338 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:29:36.243346 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-28 17:29:36.243353 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:29:36.243361 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-28 17:29:36.243369 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:29:36.243377 | orchestrator |
2025-05-28 17:29:36.243384 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2025-05-28 17:29:36.243392 | orchestrator | Wednesday 28 May 2025 17:27:34 +0000 (0:00:01.485) 0:01:26.098 *********
2025-05-28 17:29:36.243400 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is not a directory
2025-05-28 17:29:36.243439 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-28 17:29:36.243447 | orchestrator |
2025-05-28 17:29:36.243454 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2025-05-28 17:29:36.243462 | orchestrator | Wednesday 28 May 2025 17:27:35 +0000 (0:00:01.165) 0:01:27.264 *********
2025-05-28 17:29:36.243470 | orchestrator | skipping: [testbed-manager]
2025-05-28 17:29:36.243478 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:29:36.243486 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:29:36.243493 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:29:36.243501 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:29:36.243509 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:29:36.243517 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:29:36.243524 | orchestrator |
2025-05-28 17:29:36.243532 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2025-05-28 17:29:36.243540 | orchestrator | Wednesday 28 May 2025 17:27:37 +0000 (0:00:01.361) 0:01:28.626 *********
2025-05-28 17:29:36.243548 | orchestrator | skipping: [testbed-manager]
2025-05-28 17:29:36.243556 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:29:36.243563 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:29:36.243577 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:29:36.243585 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:29:36.243593 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:29:36.243600 | orchestrator | skipping: [testbed-node-5]
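The extras tasks above are an optional escape hatch: the role looks for additional server config files under the overlays/prometheus/extras/ directory, recreates their directory layout, and templates each file in. Since that directory does not exist in this testbed, the find emits the same benign warning and every host skips the two follow-up tasks. A sketch of the templating step, assuming the find result is registered as prometheus_extra_files and an extras_dir variable holding the search root; the real task may differ:

- name: Template extra prometheus server config files   # sketch only
  ansible.builtin.template:
    src: "{{ item.path }}"
    dest: "/etc/kolla/prometheus-server/{{ item.path | relpath(extras_dir) }}"
    mode: "0644"
  loop: "{{ prometheus_extra_files.files | default([]) }}"   # empty here, so every host skips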
2025-05-28 17:29:36.243608 | orchestrator |
2025-05-28 17:29:36.243616 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2025-05-28 17:29:36.243624 | orchestrator | Wednesday 28 May 2025 17:27:38 +0000 (0:00:00.937) 0:01:29.563 *********
2025-05-28 17:29:36.243632 | orchestrator | changed: [testbed-manager] => (item=prometheus-server)
2025-05-28 17:29:36.243645 | orchestrator | changed: [testbed-node-3] => (item=prometheus-node-exporter)
2025-05-28 17:29:36.243654 | orchestrator | changed: [testbed-node-0] => (item=prometheus-node-exporter)
2025-05-28 17:29:36.243667 | orchestrator | changed: [testbed-node-2] => (item=prometheus-node-exporter)
2025-05-28 17:29:36.243675 | orchestrator | changed: [testbed-node-1] => (item=prometheus-node-exporter)
2025-05-28 17:29:36.243684 | orchestrator | changed: [testbed-manager] => (item=prometheus-node-exporter)
2025-05-28 17:29:36.243692 | orchestrator | changed: [testbed-node-5] => (item=prometheus-node-exporter)
2025-05-28 17:29:36.243705 | orchestrator | changed: [testbed-node-4] => (item=prometheus-node-exporter)
2025-05-28 17:29:36.243713 | orchestrator | changed: [testbed-node-3] => (item=prometheus-cadvisor)
2025-05-28 17:29:36.243725 | orchestrator | changed: [testbed-node-0] => (item=prometheus-mysqld-exporter)
2025-05-28 17:29:36.243734 | orchestrator | changed: [testbed-manager] => (item=prometheus-cadvisor)
2025-05-28 17:29:36.243746 | orchestrator | changed: [testbed-node-2] => (item=prometheus-mysqld-exporter)
2025-05-28 17:29:36.243755 | orchestrator | changed: [testbed-node-1] => (item=prometheus-mysqld-exporter)
2025-05-28 17:29:36.243763 | orchestrator | changed: [testbed-node-5] => (item=prometheus-cadvisor)
2025-05-28 17:29:36.243779 | orchestrator | changed: [testbed-node-4] => (item=prometheus-cadvisor)
2025-05-28 17:29:36.243788 | orchestrator | changed: [testbed-manager] => (item=prometheus-alertmanager)
2025-05-28 17:29:36.243801 | orchestrator | changed: [testbed-node-3] => (item=prometheus-libvirt-exporter)
2025-05-28 17:29:36.243810 | orchestrator | changed: [testbed-node-0] => (item=prometheus-memcached-exporter)
2025-05-28 17:29:36.243821 | orchestrator | changed: [testbed-node-2] => (item=prometheus-memcached-exporter)
2025-05-28 17:29:36.243830 | orchestrator | changed: [testbed-node-5] => (item=prometheus-libvirt-exporter)
2025-05-28 17:29:36.243838 | orchestrator | changed: [testbed-manager] => (item=prometheus-blackbox-exporter)
2025-05-28 17:29:36.243850 | orchestrator | changed: [testbed-node-1] => (item=prometheus-memcached-exporter)
2025-05-28 17:29:36.243858 | orchestrator | changed: [testbed-node-4] => (item=prometheus-libvirt-exporter)
2025-05-28 17:29:36.243867 | orchestrator | changed: [testbed-node-0] => (item=prometheus-cadvisor)
2025-05-28 17:29:36.243879 | orchestrator | changed: [testbed-node-2] => (item=prometheus-cadvisor)
2025-05-28 17:29:36.243887 | orchestrator | changed: [testbed-node-1] => (item=prometheus-cadvisor)
2025-05-28 17:29:36.243899 | orchestrator | changed: [testbed-node-0] => (item=prometheus-elasticsearch-exporter)
2025-05-28 17:29:36.243907 | orchestrator | changed: [testbed-node-2] => (item=prometheus-elasticsearch-exporter)
2025-05-28 17:29:36.243919 | orchestrator | changed: [testbed-node-1] => (item=prometheus-elasticsearch-exporter)
2025-05-28 17:29:36.243928 | orchestrator |
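Each item in the container check above is one entry of the role's service map: container name, group, image, and volume list, plus an additional haproxy block for the services behind the load balancer (prometheus-server on port 9091 and prometheus-alertmanager on port 9093, both with an external frontend on api.testbed.osism.xyz). Reconstructed from the loop items printed in this log, the node-exporter entry looks like this in YAML form:

# Reconstructed from the log output above (the printed dict, rendered as YAML).
prometheus-node-exporter:
  container_name: prometheus_node_exporter
  group: prometheus-node-exporter
  enabled: true
  image: registry.osism.tech/kolla/prometheus-node-exporter:2024.2
  pid_mode: host
  volumes:
    - /etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro
    - /etc/localtime:/etc/localtime:ro
    - /etc/timezone:/etc/timezone:ro
    - kolla_logs:/var/log/kolla/
    - /:/host:ro,rslave
  dimensions: {}

When such a check reports changed, the corresponding restart handler is notified, which is what triggers the RUNNING HANDLER blocks that follow.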
2025-05-28 17:29:36.244022 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-28 17:29:36.244030 | orchestrator | Wednesday 28 May 2025 17:27:43 +0000 (0:00:00.071) 0:01:35.238 ********* 2025-05-28 17:29:36.244038 | orchestrator | 2025-05-28 17:29:36.244058 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-28 17:29:36.244066 | orchestrator | Wednesday 28 May 2025 17:27:43 +0000 (0:00:00.184) 0:01:35.423 ********* 2025-05-28 17:29:36.244074 | orchestrator | 2025-05-28 17:29:36.244082 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-28 17:29:36.244090 | orchestrator | Wednesday 28 May 2025 17:27:43 +0000 (0:00:00.060) 0:01:35.483 ********* 2025-05-28 17:29:36.244098 | orchestrator | 2025-05-28 17:29:36.244105 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-28 17:29:36.244113 | orchestrator | Wednesday 28 May 2025 17:27:44 +0000 (0:00:00.058) 0:01:35.541 ********* 2025-05-28 17:29:36.244121 | orchestrator | 2025-05-28 17:29:36.244128 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-28 17:29:36.244136 | orchestrator | Wednesday 28 May 2025 17:27:44 +0000 (0:00:00.062) 0:01:35.603 ********* 2025-05-28 17:29:36.244144 | orchestrator | 2025-05-28 17:29:36.244152 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-05-28 17:29:36.244159 | orchestrator | Wednesday 28 May 2025 17:27:44 +0000 (0:00:00.083) 0:01:35.686 ********* 2025-05-28 17:29:36.244167 | orchestrator | changed: [testbed-manager] 2025-05-28 17:29:36.244175 | orchestrator | 2025-05-28 17:29:36.244183 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-05-28 17:29:36.244194 | orchestrator | Wednesday 28 May 2025 17:28:03 +0000 (0:00:18.879) 0:01:54.566 ********* 2025-05-28 17:29:36.244202 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:29:36.244210 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:29:36.244218 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:29:36.244226 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:29:36.244234 | orchestrator | changed: [testbed-manager] 2025-05-28 17:29:36.244241 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:29:36.244249 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:29:36.244257 | orchestrator | 2025-05-28 17:29:36.244265 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-05-28 17:29:36.244272 | orchestrator | Wednesday 28 May 2025 17:28:17 +0000 (0:00:14.058) 0:02:08.624 ********* 2025-05-28 17:29:36.244285 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:29:36.244293 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:29:36.244300 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:29:36.244308 | orchestrator | 2025-05-28 17:29:36.244316 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-05-28 17:29:36.244324 | orchestrator | Wednesday 28 May 2025 17:28:27 +0000 (0:00:10.228) 0:02:18.853 ********* 2025-05-28 17:29:36.244331 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:29:36.244339 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:29:36.244347 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:29:36.244354 | 
orchestrator | 2025-05-28 17:29:36.244366 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-05-28 17:29:36.244374 | orchestrator | Wednesday 28 May 2025 17:28:37 +0000 (0:00:10.429) 0:02:29.282 ********* 2025-05-28 17:29:36.244382 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:29:36.244390 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:29:36.244397 | orchestrator | changed: [testbed-manager] 2025-05-28 17:29:36.244405 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:29:36.244413 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:29:36.244420 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:29:36.244428 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:29:36.244436 | orchestrator | 2025-05-28 17:29:36.244443 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-05-28 17:29:36.244451 | orchestrator | Wednesday 28 May 2025 17:28:57 +0000 (0:00:19.340) 0:02:48.622 ********* 2025-05-28 17:29:36.244459 | orchestrator | changed: [testbed-manager] 2025-05-28 17:29:36.244467 | orchestrator | 2025-05-28 17:29:36.244475 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-05-28 17:29:36.244482 | orchestrator | Wednesday 28 May 2025 17:29:09 +0000 (0:00:12.724) 0:03:01.347 ********* 2025-05-28 17:29:36.244490 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:29:36.244498 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:29:36.244506 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:29:36.244513 | orchestrator | 2025-05-28 17:29:36.244521 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-05-28 17:29:36.244529 | orchestrator | Wednesday 28 May 2025 17:29:19 +0000 (0:00:09.637) 0:03:10.985 ********* 2025-05-28 17:29:36.244537 | orchestrator | changed: [testbed-manager] 2025-05-28 17:29:36.244544 | orchestrator | 2025-05-28 17:29:36.244552 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-05-28 17:29:36.244560 | orchestrator | Wednesday 28 May 2025 17:29:29 +0000 (0:00:10.143) 0:03:21.129 ********* 2025-05-28 17:29:36.244568 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:29:36.244575 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:29:36.244583 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:29:36.244591 | orchestrator | 2025-05-28 17:29:36.244598 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:29:36.244606 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-28 17:29:36.244615 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-28 17:29:36.244623 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-28 17:29:36.244631 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-28 17:29:36.244639 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-05-28 17:29:36.244647 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-05-28 17:29:36.244659 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  
rescued=0 ignored=0 2025-05-28 17:29:36.244667 | orchestrator | 2025-05-28 17:29:36.244675 | orchestrator | 2025-05-28 17:29:36.244683 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:29:36.244691 | orchestrator | Wednesday 28 May 2025 17:29:35 +0000 (0:00:06.149) 0:03:27.278 ********* 2025-05-28 17:29:36.244699 | orchestrator | =============================================================================== 2025-05-28 17:29:36.244706 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 23.44s 2025-05-28 17:29:36.244714 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 19.34s 2025-05-28 17:29:36.244722 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 18.88s 2025-05-28 17:29:36.244730 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 17.13s 2025-05-28 17:29:36.244741 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.06s 2025-05-28 17:29:36.244749 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 12.72s 2025-05-28 17:29:36.244757 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.43s 2025-05-28 17:29:36.244765 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.23s 2025-05-28 17:29:36.244772 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 10.14s 2025-05-28 17:29:36.244780 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 9.64s 2025-05-28 17:29:36.244788 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 6.15s 2025-05-28 17:29:36.244796 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.97s 2025-05-28 17:29:36.244804 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.60s 2025-05-28 17:29:36.244812 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.55s 2025-05-28 17:29:36.244819 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.86s 2025-05-28 17:29:36.244827 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.56s 2025-05-28 17:29:36.244838 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.51s 2025-05-28 17:29:36.244846 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.83s 2025-05-28 17:29:36.244854 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.50s 2025-05-28 17:29:36.244861 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.16s 2025-05-28 17:29:36.244869 | orchestrator | 2025-05-28 17:29:36 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:29:39.295513 | orchestrator | 2025-05-28 17:29:39 | INFO  | Task ffc1a8d8-c459-47a4-8999-43321493f5ee is in state STARTED 2025-05-28 17:29:39.297312 | orchestrator | 2025-05-28 17:29:39 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED 2025-05-28 17:29:39.299301 | orchestrator | 2025-05-28 17:29:39 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:29:39.300851 | orchestrator | 2025-05-28 17:29:39 | INFO  | Task 
154b7b6a-a12e-4728-a520-89b16db1d970 is in state STARTED 2025-05-28 17:29:39.301455 | orchestrator | 2025-05-28 17:29:39 | INFO  | Wait 1 second(s) until the next check
[identical polling output condensed: tasks ffc1a8d8-c459-47a4-8999-43321493f5ee, c684425b-393e-4d04-8709-16507c816940, c1daa203-f755-4254-b626-9a23cffbb894 and 154b7b6a-a12e-4728-a520-89b16db1d970 remained in state STARTED, re-checked every ~3 seconds from 17:29:42 through 17:30:03]
2025-05-28 17:30:06.709686 | orchestrator | 2025-05-28 17:30:06 | INFO  | Task ffc1a8d8-c459-47a4-8999-43321493f5ee is in state SUCCESS 2025-05-28 17:30:06.710705 | orchestrator | 2025-05-28 17:30:06.710744 | orchestrator | 2025-05-28 17:30:06.710757 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-28 17:30:06.710771 | orchestrator | 2025-05-28 17:30:06.710782 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-28 17:30:06.710794 | orchestrator | Wednesday 28 May 2025 17:26:22 +0000 (0:00:00.316) 0:00:00.316 ********* 2025-05-28 17:30:06.710805 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:30:06.710818 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:30:06.710828 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:30:06.710847 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:30:06.710867 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:30:06.710887 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:30:06.710906 | orchestrator | 2025-05-28 17:30:06.710958 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-28 17:30:06.710978 | orchestrator | Wednesday 28 May 2025 17:26:23 +0000 (0:00:00.742) 0:00:01.059 ********* 2025-05-28 17:30:06.710997 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-05-28 17:30:06.711246 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-05-28 17:30:06.711265 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-05-28 17:30:06.711276 | orchestrator | ok: [testbed-node-3] => 
(item=enable_cinder_True) 2025-05-28 17:30:06.711287 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-05-28 17:30:06.711299 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-05-28 17:30:06.711313 | orchestrator | 2025-05-28 17:30:06.711635 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-05-28 17:30:06.711654 | orchestrator | 2025-05-28 17:30:06.711665 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-28 17:30:06.711676 | orchestrator | Wednesday 28 May 2025 17:26:23 +0000 (0:00:00.579) 0:00:01.638 ********* 2025-05-28 17:30:06.711688 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:30:06.711701 | orchestrator | 2025-05-28 17:30:06.711712 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-05-28 17:30:06.711724 | orchestrator | Wednesday 28 May 2025 17:26:25 +0000 (0:00:01.623) 0:00:03.261 ********* 2025-05-28 17:30:06.711735 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-05-28 17:30:06.711746 | orchestrator | 2025-05-28 17:30:06.712067 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-05-28 17:30:06.712089 | orchestrator | Wednesday 28 May 2025 17:26:28 +0000 (0:00:03.111) 0:00:06.373 ********* 2025-05-28 17:30:06.712146 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-05-28 17:30:06.712169 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-05-28 17:30:06.712188 | orchestrator | 2025-05-28 17:30:06.712207 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-05-28 17:30:06.712219 | orchestrator | Wednesday 28 May 2025 17:26:34 +0000 (0:00:05.788) 0:00:12.161 ********* 2025-05-28 17:30:06.712230 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-28 17:30:06.712241 | orchestrator | 2025-05-28 17:30:06.712252 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-05-28 17:30:06.712262 | orchestrator | Wednesday 28 May 2025 17:26:37 +0000 (0:00:02.970) 0:00:15.132 ********* 2025-05-28 17:30:06.712273 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-28 17:30:06.712302 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-05-28 17:30:06.712313 | orchestrator | 2025-05-28 17:30:06.712324 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-05-28 17:30:06.712334 | orchestrator | Wednesday 28 May 2025 17:26:40 +0000 (0:00:03.815) 0:00:18.948 ********* 2025-05-28 17:30:06.712345 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-28 17:30:06.712356 | orchestrator | 2025-05-28 17:30:06.712367 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-05-28 17:30:06.712377 | orchestrator | Wednesday 28 May 2025 17:26:44 +0000 (0:00:03.290) 0:00:22.238 ********* 2025-05-28 17:30:06.712388 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-05-28 17:30:06.712399 | orchestrator | changed: [testbed-node-0] => (item=cinder -> 
service -> service) 2025-05-28 17:30:06.712409 | orchestrator | 2025-05-28 17:30:06.712420 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-05-28 17:30:06.712501 | orchestrator | Wednesday 28 May 2025 17:26:52 +0000 (0:00:07.972) 0:00:30.211 ********* 2025-05-28 17:30:06.712517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 17:30:06.712576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 17:30:06.712591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 17:30:06.712616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.712636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.712648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.712692 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.712706 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.712725 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.712742 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.712754 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.712765 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.712777 | orchestrator | 2025-05-28 17:30:06.712816 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-28 17:30:06.712830 | orchestrator | Wednesday 28 May 2025 17:26:54 +0000 (0:00:02.000) 0:00:32.211 ********* 2025-05-28 17:30:06.712841 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:30:06.712852 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:30:06.712862 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:30:06.712873 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:30:06.712884 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:30:06.712895 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:30:06.712905 | orchestrator | 2025-05-28 17:30:06.712923 | orchestrator | TASK [cinder : include_tasks] 
************************************************** 2025-05-28 17:30:06.712934 | orchestrator | Wednesday 28 May 2025 17:26:54 +0000 (0:00:00.517) 0:00:32.728 ********* 2025-05-28 17:30:06.712945 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:30:06.712955 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:30:06.712966 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:30:06.712976 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:30:06.712987 | orchestrator | 2025-05-28 17:30:06.712998 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-05-28 17:30:06.713009 | orchestrator | Wednesday 28 May 2025 17:26:55 +0000 (0:00:00.777) 0:00:33.506 ********* 2025-05-28 17:30:06.713019 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-05-28 17:30:06.713069 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-05-28 17:30:06.713081 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-05-28 17:30:06.713091 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-05-28 17:30:06.713102 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-05-28 17:30:06.713112 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-05-28 17:30:06.713123 | orchestrator | 2025-05-28 17:30:06.713136 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-05-28 17:30:06.713148 | orchestrator | Wednesday 28 May 2025 17:26:57 +0000 (0:00:01.735) 0:00:35.241 ********* 2025-05-28 17:30:06.713162 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-28 17:30:06.713182 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-28 17:30:06.713196 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-28 17:30:06.713252 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-28 17:30:06.713268 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-28 17:30:06.713280 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-28 17:30:06.713299 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-28 17:30:06.713314 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-28 17:30:06.713375 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-28 17:30:06.713395 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-28 17:30:06.713417 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-28 17:30:06.713444 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-28 17:30:06.713462 | orchestrator | 2025-05-28 17:30:06.713479 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-05-28 17:30:06.713497 | orchestrator | Wednesday 28 May 2025 17:27:01 +0000 (0:00:04.341) 0:00:39.583 ********* 2025-05-28 17:30:06.713515 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-28 17:30:06.713536 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-28 17:30:06.713554 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-28 17:30:06.713582 | orchestrator | 2025-05-28 17:30:06.713600 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-05-28 17:30:06.713618 | orchestrator | Wednesday 28 May 2025 17:27:03 +0000 (0:00:02.326) 0:00:41.910 ********* 2025-05-28 17:30:06.713636 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-05-28 17:30:06.713654 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-05-28 17:30:06.713672 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-05-28 17:30:06.713691 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-05-28 17:30:06.713704 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-05-28 17:30:06.713757 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-05-28 17:30:06.713770 | orchestrator | 2025-05-28 17:30:06.713780 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-05-28 17:30:06.713791 | orchestrator | Wednesday 28 May 2025 17:27:06 +0000 (0:00:02.875) 0:00:44.785 ********* 2025-05-28 17:30:06.713802 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-05-28 17:30:06.713813 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-05-28 17:30:06.713823 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-05-28 17:30:06.713834 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-05-28 17:30:06.713845 | 
orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-05-28 17:30:06.713855 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-05-28 17:30:06.713866 | orchestrator | 2025-05-28 17:30:06.713877 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-05-28 17:30:06.713888 | orchestrator | Wednesday 28 May 2025 17:27:07 +0000 (0:00:01.125) 0:00:45.910 ********* 2025-05-28 17:30:06.713898 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:30:06.713909 | orchestrator | 2025-05-28 17:30:06.713920 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-05-28 17:30:06.713930 | orchestrator | Wednesday 28 May 2025 17:27:08 +0000 (0:00:00.085) 0:00:45.996 ********* 2025-05-28 17:30:06.713941 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:30:06.713952 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:30:06.713963 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:30:06.713974 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:30:06.713984 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:30:06.713995 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:30:06.714005 | orchestrator | 2025-05-28 17:30:06.714139 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-28 17:30:06.714158 | orchestrator | Wednesday 28 May 2025 17:27:08 +0000 (0:00:00.488) 0:00:46.484 ********* 2025-05-28 17:30:06.714171 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:30:06.714183 | orchestrator | 2025-05-28 17:30:06.714194 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-05-28 17:30:06.714205 | orchestrator | Wednesday 28 May 2025 17:27:09 +0000 (0:00:01.103) 0:00:47.587 ********* 2025-05-28 17:30:06.714217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 17:30:06.714247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 17:30:06.714299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 17:30:06.714314 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.714326 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.714342 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.714362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.714373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.714419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.714433 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.714445 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.714464 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.714492 | orchestrator | 2025-05-28 17:30:06.714516 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-05-28 17:30:06.714535 | orchestrator | Wednesday 28 May 2025 17:27:12 +0000 (0:00:02.952) 0:00:50.540 ********* 2025-05-28 17:30:06.714555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 17:30:06.714582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:30:06.714593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  
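Every item in this task is skipped (the per-host skip messages continue below) because each cinder service's haproxy entry carries 'tls_backend': 'no': the certificate is only copied when backend TLS is enabled. A minimal sketch of the guard, with assumed variable names:

- hosts: cinder
  tasks:
    - name: cinder | Copying over backend internal TLS certificate
      ansible.builtin.copy:
        src: "{{ kolla_certificates_dir }}/cinder-cert-and-key.pem"  # assumed path
        dest: "/etc/kolla/{{ item.key }}/cinder-cert-and-key.pem"
        mode: "0600"
      with_dict: "{{ cinder_services }}"
      when: cinder_enable_tls_backend | bool  # false in this testbed, hence the skips

With backend TLS left disabled, the conditional evaluates false for every service and host combination, which produces exactly the pattern of per-item 'skipping' results shown.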
2025-05-28 17:30:06.714603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:30:06.714613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 17:30:06.714635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:30:06.714645 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:30:06.714654 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:30:06.714664 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:30:06.714674 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 17:30:06.714693 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 17:30:06.714706 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:30:06.714723 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 17:30:06.714740 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 17:30:06.714766 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:30:06.714782 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 17:30:06.714793 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 17:30:06.714803 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:30:06.714813 | orchestrator | 2025-05-28 17:30:06.714822 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-05-28 17:30:06.714834 | orchestrator | Wednesday 28 May 2025 17:27:14 +0000 (0:00:01.557) 0:00:52.097 ********* 2025-05-28 17:30:06.714860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 17:30:06.714876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:30:06.714902 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:30:06.714919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 17:30:06.714942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:30:06.714961 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:30:06.714971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 17:30:06.714990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:30:06.715001 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:30:06.715011 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 17:30:06.715056 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 17:30:06.715068 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:30:06.715084 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 17:30:06.715095 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 17:30:06.715105 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:30:06.715121 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 17:30:06.715131 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 17:30:06.715150 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:30:06.715160 | orchestrator | 2025-05-28 
17:30:06.715169 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-05-28 17:30:06.715179 | orchestrator | Wednesday 28 May 2025 17:27:15 +0000 (0:00:01.620) 0:00:53.718 ********* 2025-05-28 17:30:06.715189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 17:30:06.715204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 17:30:06.715215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 17:30:06.715232 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.715242 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.715259 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.715273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.715283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.715294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.715309 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.715326 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.715336 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.715346 | orchestrator | 2025-05-28 17:30:06.715356 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-05-28 17:30:06.715365 | orchestrator | Wednesday 28 May 2025 17:27:18 +0000 (0:00:02.782) 0:00:56.501 ********* 2025-05-28 17:30:06.715375 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-28 17:30:06.715384 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:30:06.715394 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-28 17:30:06.715404 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:30:06.715536 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-28 17:30:06.715548 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:30:06.715558 
| orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-28 17:30:06.715568 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-28 17:30:06.715577 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-28 17:30:06.715587 | orchestrator | 2025-05-28 17:30:06.715596 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-05-28 17:30:06.715606 | orchestrator | Wednesday 28 May 2025 17:27:21 +0000 (0:00:02.677) 0:00:59.178 ********* 2025-05-28 17:30:06.715616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 17:30:06.715634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 17:30:06.715653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 17:30:06.715663 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 
'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.715678 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.715694 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.715712 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.715723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.715733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.715748 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.715758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.715768 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.715784 | orchestrator | 2025-05-28 17:30:06.715795 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-05-28 17:30:06.715804 | orchestrator | Wednesday 28 May 2025 17:27:31 +0000 (0:00:09.824) 0:01:09.002 ********* 2025-05-28 17:30:06.715819 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:30:06.715829 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:30:06.715838 | orchestrator | skipping: 
[testbed-node-2] 2025-05-28 17:30:06.715848 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:30:06.715857 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:30:06.715867 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:30:06.715876 | orchestrator | 2025-05-28 17:30:06.715886 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-05-28 17:30:06.715896 | orchestrator | Wednesday 28 May 2025 17:27:34 +0000 (0:00:03.166) 0:01:12.169 ********* 2025-05-28 17:30:06.715906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 17:30:06.715916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:30:06.715926 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:30:06.715941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 17:30:06.715951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:30:06.715967 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:30:06.715983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 17:30:06.715993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:30:06.716003 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:30:06.716013 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 17:30:06.716023 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 17:30:06.716058 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:30:06.716077 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 17:30:06.716100 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 17:30:06.716110 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:30:06.716126 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 17:30:06.716136 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 17:30:06.716146 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:30:06.716156 | orchestrator | 2025-05-28 17:30:06.716165 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-05-28 17:30:06.716175 | orchestrator | Wednesday 28 
May 2025 17:27:35 +0000 (0:00:01.073) 0:01:13.243 ********* 2025-05-28 17:30:06.716184 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:30:06.716194 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:30:06.716203 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:30:06.716212 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:30:06.716222 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:30:06.716231 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:30:06.716241 | orchestrator | 2025-05-28 17:30:06.716315 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-05-28 17:30:06.716328 | orchestrator | Wednesday 28 May 2025 17:27:36 +0000 (0:00:00.779) 0:01:14.023 ********* 2025-05-28 17:30:06.716350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 17:30:06.716383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 17:30:06.716411 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.716423 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 17:30:06.716433 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.716455 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.716466 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.716560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.716575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.716589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.716606 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.716642 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-28 17:30:06.716659 | orchestrator | 2025-05-28 17:30:06.716675 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-28 17:30:06.716693 | orchestrator | Wednesday 28 May 2025 17:27:38 +0000 (0:00:02.384) 0:01:16.407 ********* 2025-05-28 17:30:06.716710 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:30:06.716728 | orchestrator | skipping: 
[testbed-node-1] 2025-05-28 17:30:06.716737 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:30:06.716747 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:30:06.716756 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:30:06.716765 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:30:06.716775 | orchestrator | 2025-05-28 17:30:06.716784 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-05-28 17:30:06.716794 | orchestrator | Wednesday 28 May 2025 17:27:39 +0000 (0:00:01.194) 0:01:17.601 ********* 2025-05-28 17:30:06.716803 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:30:06.716813 | orchestrator | 2025-05-28 17:30:06.716822 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-05-28 17:30:06.716831 | orchestrator | Wednesday 28 May 2025 17:27:41 +0000 (0:00:02.106) 0:01:19.708 ********* 2025-05-28 17:30:06.716841 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:30:06.716850 | orchestrator | 2025-05-28 17:30:06.716860 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-05-28 17:30:06.716869 | orchestrator | Wednesday 28 May 2025 17:27:43 +0000 (0:00:02.061) 0:01:21.770 ********* 2025-05-28 17:30:06.716878 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:30:06.716888 | orchestrator | 2025-05-28 17:30:06.716897 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-28 17:30:06.716907 | orchestrator | Wednesday 28 May 2025 17:28:00 +0000 (0:00:17.163) 0:01:38.934 ********* 2025-05-28 17:30:06.716916 | orchestrator | 2025-05-28 17:30:06.716932 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-28 17:30:06.716942 | orchestrator | Wednesday 28 May 2025 17:28:01 +0000 (0:00:00.063) 0:01:38.997 ********* 2025-05-28 17:30:06.716951 | orchestrator | 2025-05-28 17:30:06.716961 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-28 17:30:06.716970 | orchestrator | Wednesday 28 May 2025 17:28:01 +0000 (0:00:00.061) 0:01:39.059 ********* 2025-05-28 17:30:06.716980 | orchestrator | 2025-05-28 17:30:06.716989 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-28 17:30:06.716998 | orchestrator | Wednesday 28 May 2025 17:28:01 +0000 (0:00:00.062) 0:01:39.121 ********* 2025-05-28 17:30:06.717008 | orchestrator | 2025-05-28 17:30:06.717017 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-28 17:30:06.717027 | orchestrator | Wednesday 28 May 2025 17:28:01 +0000 (0:00:00.062) 0:01:39.183 ********* 2025-05-28 17:30:06.717061 | orchestrator | 2025-05-28 17:30:06.717071 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-28 17:30:06.717080 | orchestrator | Wednesday 28 May 2025 17:28:01 +0000 (0:00:00.059) 0:01:39.243 ********* 2025-05-28 17:30:06.717090 | orchestrator | 2025-05-28 17:30:06.717099 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-05-28 17:30:06.717118 | orchestrator | Wednesday 28 May 2025 17:28:01 +0000 (0:00:00.064) 0:01:39.307 ********* 2025-05-28 17:30:06.717128 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:30:06.717138 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:30:06.717147 | orchestrator | 
changed: [testbed-node-2] 2025-05-28 17:30:06.717157 | orchestrator | 2025-05-28 17:30:06.717166 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-05-28 17:30:06.717176 | orchestrator | Wednesday 28 May 2025 17:28:24 +0000 (0:00:23.417) 0:02:02.725 ********* 2025-05-28 17:30:06.717185 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:30:06.717197 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:30:06.717208 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:30:06.717220 | orchestrator | 2025-05-28 17:30:06.717230 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-05-28 17:30:06.717242 | orchestrator | Wednesday 28 May 2025 17:28:31 +0000 (0:00:06.567) 0:02:09.292 ********* 2025-05-28 17:30:06.717253 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:30:06.717264 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:30:06.717275 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:30:06.717286 | orchestrator | 2025-05-28 17:30:06.717296 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-05-28 17:30:06.717307 | orchestrator | Wednesday 28 May 2025 17:29:55 +0000 (0:01:24.423) 0:03:33.715 ********* 2025-05-28 17:30:06.717319 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:30:06.717329 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:30:06.717340 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:30:06.717352 | orchestrator | 2025-05-28 17:30:06.717363 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-05-28 17:30:06.717374 | orchestrator | Wednesday 28 May 2025 17:30:02 +0000 (0:00:06.904) 0:03:40.620 ********* 2025-05-28 17:30:06.717386 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:30:06.717397 | orchestrator | 2025-05-28 17:30:06.717408 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:30:06.717420 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-28 17:30:06.717433 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-05-28 17:30:06.717451 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-05-28 17:30:06.717462 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-28 17:30:06.717472 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-28 17:30:06.717482 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-28 17:30:06.717491 | orchestrator | 2025-05-28 17:30:06.717501 | orchestrator | 2025-05-28 17:30:06.717510 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:30:06.717520 | orchestrator | Wednesday 28 May 2025 17:30:03 +0000 (0:00:00.918) 0:03:41.539 ********* 2025-05-28 17:30:06.717530 | orchestrator | =============================================================================== 2025-05-28 17:30:06.717539 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 84.42s 2025-05-28 17:30:06.717549 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 23.42s 
17:30:06.717558 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 17.16s
2025-05-28 17:30:06.717568 | orchestrator | cinder : Copying over cinder.conf --------------------------------------- 9.82s
2025-05-28 17:30:06.717577 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.97s
2025-05-28 17:30:06.717593 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 6.90s
2025-05-28 17:30:06.717603 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 6.57s
2025-05-28 17:30:06.717612 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 5.79s
2025-05-28 17:30:06.717627 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.34s
2025-05-28 17:30:06.717637 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.82s
2025-05-28 17:30:06.717647 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.29s
2025-05-28 17:30:06.717656 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 3.17s
2025-05-28 17:30:06.717666 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.11s
2025-05-28 17:30:06.717675 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 2.97s
2025-05-28 17:30:06.717684 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 2.95s
2025-05-28 17:30:06.717694 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.88s
2025-05-28 17:30:06.717703 | orchestrator | cinder : Copying over config.json files for services -------------------- 2.78s
2025-05-28 17:30:06.717713 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.67s
2025-05-28 17:30:06.717722 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.38s
2025-05-28 17:30:06.717732 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 2.33s
2025-05-28 17:30:06.717741 | orchestrator | 2025-05-28 17:30:06 | INFO  | Task e47709b4-5c24-45ff-8c5e-37701531d5ee is in state STARTED
2025-05-28 17:30:06.717751 | orchestrator | 2025-05-28 17:30:06 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED
2025-05-28 17:30:06.717761 | orchestrator | 2025-05-28 17:30:06 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:30:06.717770 | orchestrator | 2025-05-28 17:30:06 | INFO  | Task 154b7b6a-a12e-4728-a520-89b16db1d970 is in state STARTED
2025-05-28 17:30:06.717780 | orchestrator | 2025-05-28 17:30:06 | INFO  | Wait 1 second(s) until the next check
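The INFO lines above come from the OSISM client, which queues the deployment plays as asynchronous tasks and then re-reads each task's state, sleeping between checks until every task leaves the STARTED state. A minimal sketch of that wait loop, where fetch_state is a hypothetical stand-in for the real state lookup (OSISM tracks these IDs as Celery task results):

```python
import time

POLL_INTERVAL = 1  # seconds, matching the "Wait 1 second(s)" messages
TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, fetch_state):
    """Poll until every task reaches a terminal state.

    fetch_state: hypothetical callable mapping a task ID to its current
    state string (e.g. backed by Celery's AsyncResult(task_id).state).
    """
    pending = set(task_ids)
    while pending:
        for task_id in list(pending):
            state = fetch_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            print(f"Wait {POLL_INTERVAL} second(s) until the next check")
            time.sleep(POLL_INTERVAL)
```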
[... identical task-state polling repeated every ~3 seconds: tasks e47709b4-5c24-45ff-8c5e-37701531d5ee, c684425b-393e-4d04-8709-16507c816940, c1daa203-f755-4254-b626-9a23cffbb894 and 154b7b6a-a12e-4728-a520-89b16db1d970 remained in state STARTED from 17:30:09 through 17:31:31 ...]
2025-05-28 17:31:34.657444 | orchestrator | 2025-05-28 17:31:34 | INFO  | Task e47709b4-5c24-45ff-8c5e-37701531d5ee is in state STARTED
2025-05-28 17:31:34.657688 | orchestrator | 2025-05-28 17:31:34 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED
2025-05-28 17:31:34.657724 | orchestrator | 2025-05-28 17:31:34 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:31:34.658395 | orchestrator | 2025-05-28 17:31:34 | INFO  | Task 6138a53e-05e9-47c4-84bf-b7ab65960f9f is in state STARTED
2025-05-28 17:31:34.659616 | orchestrator |
2025-05-28 17:31:34.659643 | orchestrator | 2025-05-28 17:31:34 | INFO  | Task 154b7b6a-a12e-4728-a520-89b16db1d970 is in state SUCCESS
2025-05-28 17:31:34.661407 | orchestrator |
2025-05-28 17:31:34.661443 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-28 17:31:34.661457 | orchestrator |
2025-05-28 17:31:34.661468 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-28 17:31:34.661480 | orchestrator | Wednesday 28 May 2025 17:29:40 +0000 (0:00:00.250) 0:00:00.250 *********
2025-05-28 17:31:34.661491 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:31:34.661503 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:31:34.661514 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:31:34.661524 | orchestrator |
2025-05-28 17:31:34.661536 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-28 17:31:34.661547 | orchestrator | Wednesday 28 May 2025 17:29:40 +0000 (0:00:00.292) 0:00:00.542 *********
2025-05-28 17:31:34.661558 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2025-05-28 17:31:34.661570 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2025-05-28 17:31:34.661601 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2025-05-28 17:31:34.661612 | orchestrator |
2025-05-28 17:31:34.661622 | orchestrator | PLAY [Apply role barbican] *****************************************************
2025-05-28 17:31:34.661657 | orchestrator |
2025-05-28 17:31:34.661669 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-05-28 17:31:34.661679 | orchestrator | Wednesday 28 May 2025 17:29:40 +0000 (0:00:00.401) 0:00:00.944 *********
2025-05-28 17:31:34.661690 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 17:31:34.661702 | orchestrator |
2025-05-28 17:31:34.661713 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2025-05-28 17:31:34.661724 | orchestrator | Wednesday 28 May 2025 17:29:41 +0000 (0:00:00.556) 0:00:01.501 *********
2025-05-28 17:31:34.661735 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2025-05-28 17:31:34.661746 | orchestrator |
2025-05-28 17:31:34.661756 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2025-05-28 17:31:34.661767 | orchestrator | Wednesday 28 May 2025 17:29:44 +0000 (0:00:03.274) 0:00:04.775 *********
2025-05-28 17:31:34.661778 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2025-05-28 17:31:34.661789 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2025-05-28 17:31:34.661800 | orchestrator |
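Each service-ks-register task above corresponds to one Keystone API call: register the service, then its internal and public endpoints, then the project, user, and role assignments it needs. A rough openstacksdk equivalent of the first two steps, reusing the names and URLs printed in the log (an illustrative sketch only; kolla-ansible drives this through its own Ansible modules, and the "testbed" cloud entry in clouds.yaml is assumed):

```python
import openstack

# Assumes a clouds.yaml entry named "testbed" with admin credentials.
conn = openstack.connect(cloud="testbed")

# "barbican (key-manager)" service, as in the Creating services task
service = conn.identity.create_service(name="barbican", type="key-manager")

# Internal and public endpoints, as in the Creating endpoints task
for interface, url in [
    ("internal", "https://api-int.testbed.osism.xyz:9311"),
    ("public", "https://api.testbed.osism.xyz:9311"),
]:
    conn.identity.create_endpoint(service_id=service.id, interface=interface, url=url)
```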
2025-05-28 17:31:34.661810 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2025-05-28 17:31:34.661821 | orchestrator | Wednesday 28 May 2025 17:29:50 +0000 (0:00:06.100) 0:00:10.876 *********
2025-05-28 17:31:34.661831 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-28 17:31:34.661843 | orchestrator |
2025-05-28 17:31:34.661853 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2025-05-28 17:31:34.661864 | orchestrator | Wednesday 28 May 2025 17:29:53 +0000 (0:00:03.199) 0:00:14.075 *********
2025-05-28 17:31:34.661875 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-28 17:31:34.661885 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2025-05-28 17:31:34.661896 | orchestrator |
2025-05-28 17:31:34.661906 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2025-05-28 17:31:34.661917 | orchestrator | Wednesday 28 May 2025 17:29:57 +0000 (0:00:04.012) 0:00:18.088 *********
2025-05-28 17:31:34.661928 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-28 17:31:34.661939 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2025-05-28 17:31:34.661949 | orchestrator | changed: [testbed-node-0] => (item=creator)
2025-05-28 17:31:34.661961 | orchestrator | changed: [testbed-node-0] => (item=observer)
2025-05-28 17:31:34.661972 | orchestrator | changed: [testbed-node-0] => (item=audit)
2025-05-28 17:31:34.662009 | orchestrator |
2025-05-28 17:31:34.662074 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2025-05-28 17:31:34.662087 | orchestrator | Wednesday 28 May 2025 17:30:13 +0000 (0:00:15.577) 0:00:33.666 *********
2025-05-28 17:31:34.662099 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2025-05-28 17:31:34.662111 | orchestrator |
2025-05-28 17:31:34.662123 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2025-05-28 17:31:34.662136 | orchestrator | Wednesday 28 May 2025 17:30:17 +0000 (0:00:04.295) 0:00:37.961 *********
2025-05-28 17:31:34.662152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-28 17:31:34.662198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-28 17:31:34.662213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-28 17:31:34.662227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-28 17:31:34.662241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-28 17:31:34.662253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 
5672'], 'timeout': '30'}}})
2025-05-28 17:31:34.662283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-28 17:31:34.662303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-28 17:31:34.662317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-28 17:31:34.662329 | orchestrator |
2025-05-28 17:31:34.662341 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2025-05-28 17:31:34.662354 | orchestrator | Wednesday 28 May 2025 17:30:19 +0000 (0:00:01.822) 0:00:39.784 *********
2025-05-28 17:31:34.662366 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2025-05-28 17:31:34.662377 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2025-05-28 17:31:34.662387 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2025-05-28 17:31:34.662398 | orchestrator |
2025-05-28 17:31:34.662411 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2025-05-28 17:31:34.662429 | orchestrator | Wednesday 28 May 2025 17:30:20 +0000 (0:00:01.229) 0:00:41.013 *********
2025-05-28 17:31:34.662445 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:31:34.662461 | orchestrator |
2025-05-28 17:31:34.662477 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2025-05-28 17:31:34.662494 | orchestrator | Wednesday 28 May 2025 17:30:21 +0000 (0:00:00.203) 0:00:41.217 *********
2025-05-28 17:31:34.662511 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:31:34.662529 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:31:34.662544 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:31:34.662561 | orchestrator |
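Every container definition in these loops carries the same healthcheck shape: interval, retries, start_period, and timeout as string seconds, plus a CMD-SHELL test such as healthcheck_port barbican-worker 5672. A sketch of how such a dict could be mapped onto the Docker SDK's Healthcheck type, which expects durations in nanoseconds (an assumed mapping for illustration; the real conversion happens inside kolla-ansible's container modules):

```python
from docker.types import Healthcheck

NS_PER_SECOND = 1_000_000_000

def to_healthcheck(spec: dict) -> Healthcheck:
    """Convert a kolla-style healthcheck dict (string seconds)
    into the Docker SDK type (integer nanoseconds)."""
    return Healthcheck(
        test=spec["test"],
        interval=int(spec["interval"]) * NS_PER_SECOND,
        timeout=int(spec["timeout"]) * NS_PER_SECOND,
        retries=int(spec["retries"]),
        start_period=int(spec["start_period"]) * NS_PER_SECOND,
    )

# Example: the barbican-worker healthcheck from the log
spec = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_port barbican-worker 5672"],
    "timeout": "30",
}
hc = to_healthcheck(spec)
# docker.from_env().containers.run(image, healthcheck=hc, ...) would attach it.
```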
2025-05-28 17:31:34.662580 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-05-28 17:31:34.662598 | orchestrator | Wednesday 28 May 2025 17:30:21 +0000 (0:00:00.606) 0:00:41.823 *********
2025-05-28 17:31:34.662616 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 17:31:34.662634 | orchestrator |
2025-05-28 17:31:34.662646 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2025-05-28 17:31:34.662657 | orchestrator | Wednesday 28 May 2025 17:30:22 +0000 (0:00:00.737) 0:00:42.560 *********
2025-05-28 17:31:34.662668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-28 17:31:34.662699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-28 17:31:34.662718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-28 17:31:34.662730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener',
'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-28 17:31:34.662742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-28 17:31:34.662753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-28 17:31:34.662771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:31:34.662791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:31:34.662809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:31:34.662820 | orchestrator | 2025-05-28 17:31:34.662831 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-05-28 17:31:34.662843 | orchestrator | Wednesday 28 May 2025 17:30:25 +0000 (0:00:03.376) 0:00:45.937 ********* 2025-05-28 17:31:34.662854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-28 17:31:34.662866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-28 17:31:34.662883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:31:34.662894 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:31:34.662913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-28 17:31:34.662930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-28 17:31:34.662942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:31:34.662953 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:31:34.662965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-28 17:31:34.663022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-28 17:31:34.663037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:31:34.663048 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:31:34.663059 | orchestrator | 2025-05-28 17:31:34.663069 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-05-28 17:31:34.663080 | orchestrator | Wednesday 28 May 2025 17:30:26 +0000 (0:00:00.731) 0:00:46.669 ********* 2025-05-28 17:31:34.663099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-28 17:31:34.663116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-28 17:31:34.663128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:31:34.663139 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:31:34.663150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-28 17:31:34.663176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-28 17:31:34.663187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:31:34.663199 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:31:34.663223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-28 17:31:34.663235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-28 17:31:34.663246 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:31:34.663264 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:31:34.663275 | orchestrator | 2025-05-28 17:31:34.663286 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-05-28 17:31:34.663297 | orchestrator | Wednesday 28 May 2025 17:30:27 +0000 (0:00:00.962) 0:00:47.631 ********* 2025-05-28 17:31:34.663308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-28 17:31:34.663326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-28 17:31:34.663342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-28 17:31:34.663355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-28 17:31:34.663372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-28 17:31:34.663383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-28 17:31:34.663395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:31:34.663411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:31:34.663423 | 
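The (item=...) dumps above come from kolla-ansible looping over its per-service definition map; each value bundles the container name, image, bind mounts, a Docker-style healthcheck, and optional HAProxy frontends. A minimal sketch of that shape and of the loop condition behind all the "skipping" lines (illustrative only; names such as tls_backend_enabled are assumptions, not the actual kolla-ansible source):

# Sketch of the per-service map that the tasks above iterate over.
# Values mirror the log output; this is not kolla-ansible's real code.
services = {
    "barbican-api": {
        "container_name": "barbican_api",
        "enabled": True,
        "image": "registry.osism.tech/kolla/barbican-api:2024.2",
        "volumes": [
            "/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro",
            "barbican:/var/lib/barbican/",
            "kolla_logs:/var/log/kolla/",
        ],
        "healthcheck": {
            "interval": "30",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9311"],
        },
        "haproxy": {
            "barbican_api": {"enabled": "yes", "external": False, "port": "9311"},
            "barbican_api_external": {
                "enabled": "yes",
                "external": True,
                "external_fqdn": "api.testbed.osism.xyz",
                "port": "9311",
            },
        },
    },
}

# The TLS tasks skip every item while backend TLS is disabled
# ('tls_backend': 'no' above), hence one "skipping" line per service
# per node.
tls_backend_enabled = False  # assumption mirroring 'tls_backend': 'no'

for name, svc in services.items():
    if not (svc["enabled"] and tls_backend_enabled):
        print(f"skipping: ({name})")

The haproxy sub-dict is what yields the two load-balancer frontends seen in the dumps: an internal listener on the VIP and an external one behind api.testbed.osism.xyz, both on port 9311.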
orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:31:34.663434 | orchestrator | 2025-05-28 17:31:34.663450 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-05-28 17:31:34.663461 | orchestrator | Wednesday 28 May 2025 17:30:31 +0000 (0:00:03.683) 0:00:51.314 ********* 2025-05-28 17:31:34.663472 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:31:34.663483 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:31:34.663494 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:31:34.663505 | orchestrator | 2025-05-28 17:31:34.663515 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-05-28 17:31:34.663526 | orchestrator | Wednesday 28 May 2025 17:30:33 +0000 (0:00:02.525) 0:00:53.840 ********* 2025-05-28 17:31:34.663544 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-28 17:31:34.663554 | orchestrator | 2025-05-28 17:31:34.663565 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-05-28 17:31:34.663576 | orchestrator | Wednesday 28 May 2025 17:30:35 +0000 (0:00:01.563) 0:00:55.404 ********* 2025-05-28 17:31:34.663587 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:31:34.663597 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:31:34.663608 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:31:34.663619 | orchestrator | 2025-05-28 17:31:34.663630 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-05-28 17:31:34.663640 | orchestrator | Wednesday 28 May 2025 17:30:36 +0000 (0:00:00.880) 0:00:56.284 ********* 2025-05-28 17:31:34.663651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-28 17:31:34.663663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-28 17:31:34.663680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-28 17:31:34.663693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-28 17:31:34.663753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-28 17:31:34.663766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-28 17:31:34.663777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:31:34.663788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:31:34.663808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:31:34.663827 | orchestrator | 2025-05-28 17:31:34.663845 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-05-28 17:31:34.663862 | orchestrator | Wednesday 28 May 2025 17:30:45 +0000 (0:00:08.990) 0:01:05.275 ********* 2025-05-28 17:31:34.663901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-28 17:31:34.663936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-28 17:31:34.663954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:31:34.663973 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:31:34.664068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-28 17:31:34.664090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-28 17:31:34.664121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:31:34.664154 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:31:34.664183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 
'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-28 17:31:34.664203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-28 17:31:34.664224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:31:34.664244 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:31:34.664265 | orchestrator | 2025-05-28 17:31:34.664284 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-05-28 17:31:34.664304 | orchestrator | Wednesday 28 May 2025 17:30:45 +0000 (0:00:00.789) 0:01:06.064 ********* 2025-05-28 17:31:34.664325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-28 17:31:34.664355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-28 17:31:34.664393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-28 17:31:34.664414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-28 17:31:34.664435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-28 17:31:34.664455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-28 17:31:34.664476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:31:34.664524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:31:34.664553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:31:34.664572 | orchestrator | 2025-05-28 17:31:34.664589 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-28 17:31:34.664600 | orchestrator | Wednesday 28 May 2025 17:30:48 +0000 (0:00:02.787) 0:01:08.852 ********* 2025-05-28 17:31:34.664615 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:31:34.664631 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:31:34.664647 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:31:34.664663 | orchestrator | 2025-05-28 17:31:34.664681 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-05-28 17:31:34.664696 | orchestrator | Wednesday 28 May 2025 17:30:49 +0000 (0:00:00.523) 0:01:09.375 ********* 2025-05-28 17:31:34.664711 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:31:34.664728 | orchestrator | 2025-05-28 17:31:34.664738 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-05-28 17:31:34.664748 | orchestrator | Wednesday 28 May 2025 17:30:51 +0000 (0:00:02.272) 0:01:11.648 ********* 2025-05-28 17:31:34.664757 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:31:34.664766 | orchestrator | 2025-05-28 17:31:34.664776 | orchestrator | TASK [barbican : Running barbican 
bootstrap container] ************************* 2025-05-28 17:31:34.664785 | orchestrator | Wednesday 28 May 2025 17:30:53 +0000 (0:00:02.110) 0:01:13.759 ********* 2025-05-28 17:31:34.664795 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:31:34.664804 | orchestrator | 2025-05-28 17:31:34.664813 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-28 17:31:34.664823 | orchestrator | Wednesday 28 May 2025 17:31:05 +0000 (0:00:11.467) 0:01:25.226 ********* 2025-05-28 17:31:34.664832 | orchestrator | 2025-05-28 17:31:34.664842 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-28 17:31:34.664851 | orchestrator | Wednesday 28 May 2025 17:31:05 +0000 (0:00:00.148) 0:01:25.374 ********* 2025-05-28 17:31:34.664861 | orchestrator | 2025-05-28 17:31:34.664870 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-28 17:31:34.664879 | orchestrator | Wednesday 28 May 2025 17:31:05 +0000 (0:00:00.113) 0:01:25.488 ********* 2025-05-28 17:31:34.664889 | orchestrator | 2025-05-28 17:31:34.664898 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-05-28 17:31:34.664907 | orchestrator | Wednesday 28 May 2025 17:31:05 +0000 (0:00:00.112) 0:01:25.601 ********* 2025-05-28 17:31:34.664917 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:31:34.664926 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:31:34.664936 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:31:34.664945 | orchestrator | 2025-05-28 17:31:34.664955 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-05-28 17:31:34.664964 | orchestrator | Wednesday 28 May 2025 17:31:13 +0000 (0:00:08.402) 0:01:34.003 ********* 2025-05-28 17:31:34.665002 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:31:34.665015 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:31:34.665025 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:31:34.665034 | orchestrator | 2025-05-28 17:31:34.665044 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-05-28 17:31:34.665053 | orchestrator | Wednesday 28 May 2025 17:31:25 +0000 (0:00:11.887) 0:01:45.890 ********* 2025-05-28 17:31:34.665063 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:31:34.665072 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:31:34.665081 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:31:34.665091 | orchestrator | 2025-05-28 17:31:34.665100 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:31:34.665111 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-28 17:31:34.665123 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-28 17:31:34.665132 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-28 17:31:34.665142 | orchestrator | 2025-05-28 17:31:34.665151 | orchestrator | 2025-05-28 17:31:34.665161 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:31:34.665170 | orchestrator | Wednesday 28 May 2025 17:31:32 +0000 (0:00:07.208) 0:01:53.099 ********* 2025-05-28 17:31:34.665180 | orchestrator | 
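Each healthcheck block in the item dumps maps onto a Docker healthcheck: healthcheck_curl probes the barbican-api endpoint on 9311, while healthcheck_port checks that the listener and worker hold their RabbitMQ connection on 5672. One plausible rendering into docker run flags (a sketch under the assumption that the dict keys translate one-to-one; kolla's real template may differ):

# Render one of the healthcheck dicts above into `docker run` health flags.
healthcheck = {
    "interval": "30",
    "retries": "3",
    "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9311"],
    "timeout": "30",
}

flags = [
    f"--health-cmd={healthcheck['test'][1]!r}",
    f"--health-interval={healthcheck['interval']}s",
    f"--health-retries={healthcheck['retries']}",
    f"--health-start-period={healthcheck['start_period']}s",
    f"--health-timeout={healthcheck['timeout']}s",
]
print(" ".join(flags))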
=============================================================================== 2025-05-28 17:31:34.665189 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.58s 2025-05-28 17:31:34.665206 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 11.89s 2025-05-28 17:31:34.665236 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.47s 2025-05-28 17:31:34.665245 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 8.99s 2025-05-28 17:31:34.665255 | orchestrator | barbican : Restart barbican-api container ------------------------------- 8.40s 2025-05-28 17:31:34.665264 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 7.21s 2025-05-28 17:31:34.665274 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.10s 2025-05-28 17:31:34.665283 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.30s 2025-05-28 17:31:34.665292 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.01s 2025-05-28 17:31:34.665302 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.68s 2025-05-28 17:31:34.665317 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.38s 2025-05-28 17:31:34.665327 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.27s 2025-05-28 17:31:34.665336 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.20s 2025-05-28 17:31:34.665346 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.79s 2025-05-28 17:31:34.665355 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.53s 2025-05-28 17:31:34.665365 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.27s 2025-05-28 17:31:34.665374 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.11s 2025-05-28 17:31:34.665383 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.82s 2025-05-28 17:31:34.665393 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.56s 2025-05-28 17:31:34.665403 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.23s 2025-05-28 17:31:34.665412 | orchestrator | 2025-05-28 17:31:34 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:31:37.699895 | orchestrator | 2025-05-28 17:31:37 | INFO  | Task e47709b4-5c24-45ff-8c5e-37701531d5ee is in state STARTED 2025-05-28 17:31:37.702333 | orchestrator | 2025-05-28 17:31:37 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED 2025-05-28 17:31:37.702384 | orchestrator | 2025-05-28 17:31:37 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:31:37.702687 | orchestrator | 2025-05-28 17:31:37 | INFO  | Task 6138a53e-05e9-47c4-84bf-b7ab65960f9f is in state STARTED 2025-05-28 17:31:37.702716 | orchestrator | 2025-05-28 17:31:37 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:31:40.750404 | orchestrator | 2025-05-28 17:31:40 | INFO  | Task e47709b4-5c24-45ff-8c5e-37701531d5ee is in state STARTED 2025-05-28 17:31:40.750560 | orchestrator | 2025-05-28 17:31:40 | INFO  | Task 
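The long runs of "Task <uuid> is in state STARTED" that follow come from the deploy wrapper polling the OSISM task backend once per second until every task leaves STARTED (note how 6138a53e-… later flips to SUCCESS and a follow-up task 0300d580-… immediately appears in its place). The pattern, as a self-contained sketch with a fake backend (get_task_state and its two-poll schedule are invented for illustration; the real tooling queries Celery task states):

import time
from itertools import count

_polls = {}  # fake backend: every task reports STARTED twice, then SUCCESS

def get_task_state(task_id):
    return "STARTED" if next(_polls.setdefault(task_id, count())) < 2 else "SUCCESS"

def wait_for_tasks(task_ids, interval=1):
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):  # sorted() copies, so discard below is safe
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)

wait_for_tasks(["e47709b4", "c684425b", "c1daa203"])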
c684425b-393e-4d04-8709-16507c816940 is in state STARTED 2025-05-28 17:31:40.751131 | orchestrator | 2025-05-28 17:31:40 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:31:40.752570 | orchestrator | 2025-05-28 17:31:40 | INFO  | Task 6138a53e-05e9-47c4-84bf-b7ab65960f9f is in state STARTED 2025-05-28 17:31:40.752606 | orchestrator | 2025-05-28 17:31:40 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:31:43.800927 | orchestrator | 2025-05-28 17:31:43 | INFO  | Task e47709b4-5c24-45ff-8c5e-37701531d5ee is in state STARTED 2025-05-28 17:31:43.804627 | orchestrator | 2025-05-28 17:31:43 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED 2025-05-28 17:31:43.807596 | orchestrator | 2025-05-28 17:31:43 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:31:43.809909 | orchestrator | 2025-05-28 17:31:43 | INFO  | Task 6138a53e-05e9-47c4-84bf-b7ab65960f9f is in state STARTED 2025-05-28 17:31:43.810235 | orchestrator | 2025-05-28 17:31:43 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:31:46.856270 | orchestrator | 2025-05-28 17:31:46 | INFO  | Task e47709b4-5c24-45ff-8c5e-37701531d5ee is in state STARTED 2025-05-28 17:31:46.858195 | orchestrator | 2025-05-28 17:31:46 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED 2025-05-28 17:31:46.859860 | orchestrator | 2025-05-28 17:31:46 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:31:46.861346 | orchestrator | 2025-05-28 17:31:46 | INFO  | Task 6138a53e-05e9-47c4-84bf-b7ab65960f9f is in state STARTED 2025-05-28 17:31:46.861447 | orchestrator | 2025-05-28 17:31:46 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:31:49.918551 | orchestrator | 2025-05-28 17:31:49 | INFO  | Task e47709b4-5c24-45ff-8c5e-37701531d5ee is in state STARTED 2025-05-28 17:31:49.918627 | orchestrator | 2025-05-28 17:31:49 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED 2025-05-28 17:31:49.918915 | orchestrator | 2025-05-28 17:31:49 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:31:49.920021 | orchestrator | 2025-05-28 17:31:49 | INFO  | Task 6138a53e-05e9-47c4-84bf-b7ab65960f9f is in state STARTED 2025-05-28 17:31:49.920094 | orchestrator | 2025-05-28 17:31:49 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:31:52.959301 | orchestrator | 2025-05-28 17:31:52 | INFO  | Task e47709b4-5c24-45ff-8c5e-37701531d5ee is in state STARTED 2025-05-28 17:31:52.959523 | orchestrator | 2025-05-28 17:31:52 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED 2025-05-28 17:31:52.963047 | orchestrator | 2025-05-28 17:31:52 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:31:52.965137 | orchestrator | 2025-05-28 17:31:52 | INFO  | Task 6138a53e-05e9-47c4-84bf-b7ab65960f9f is in state STARTED 2025-05-28 17:31:52.965186 | orchestrator | 2025-05-28 17:31:52 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:31:56.004731 | orchestrator | 2025-05-28 17:31:56 | INFO  | Task e47709b4-5c24-45ff-8c5e-37701531d5ee is in state STARTED 2025-05-28 17:31:56.005556 | orchestrator | 2025-05-28 17:31:56 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED 2025-05-28 17:31:56.007397 | orchestrator | 2025-05-28 17:31:56 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:31:56.008386 | orchestrator | 2025-05-28 17:31:56 | INFO  | Task 
6138a53e-05e9-47c4-84bf-b7ab65960f9f is in state STARTED 2025-05-28 17:31:56.008551 | orchestrator | 2025-05-28 17:31:56 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:31:59.064216 | orchestrator | 2025-05-28 17:31:59 | INFO  | Task e47709b4-5c24-45ff-8c5e-37701531d5ee is in state STARTED 2025-05-28 17:31:59.064411 | orchestrator | 2025-05-28 17:31:59 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED 2025-05-28 17:31:59.065343 | orchestrator | 2025-05-28 17:31:59 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:31:59.065856 | orchestrator | 2025-05-28 17:31:59 | INFO  | Task 6138a53e-05e9-47c4-84bf-b7ab65960f9f is in state STARTED 2025-05-28 17:31:59.065885 | orchestrator | 2025-05-28 17:31:59 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:32:02.116794 | orchestrator | 2025-05-28 17:32:02 | INFO  | Task e47709b4-5c24-45ff-8c5e-37701531d5ee is in state STARTED 2025-05-28 17:32:02.119233 | orchestrator | 2025-05-28 17:32:02 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED 2025-05-28 17:32:02.120677 | orchestrator | 2025-05-28 17:32:02 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:32:02.122597 | orchestrator | 2025-05-28 17:32:02 | INFO  | Task 6138a53e-05e9-47c4-84bf-b7ab65960f9f is in state STARTED 2025-05-28 17:32:02.123108 | orchestrator | 2025-05-28 17:32:02 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:32:05.174345 | orchestrator | 2025-05-28 17:32:05 | INFO  | Task e47709b4-5c24-45ff-8c5e-37701531d5ee is in state STARTED 2025-05-28 17:32:05.174747 | orchestrator | 2025-05-28 17:32:05 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED 2025-05-28 17:32:05.175900 | orchestrator | 2025-05-28 17:32:05 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:32:05.177057 | orchestrator | 2025-05-28 17:32:05 | INFO  | Task 6138a53e-05e9-47c4-84bf-b7ab65960f9f is in state STARTED 2025-05-28 17:32:05.177100 | orchestrator | 2025-05-28 17:32:05 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:32:08.220739 | orchestrator | 2025-05-28 17:32:08 | INFO  | Task e47709b4-5c24-45ff-8c5e-37701531d5ee is in state STARTED 2025-05-28 17:32:08.220868 | orchestrator | 2025-05-28 17:32:08 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED 2025-05-28 17:32:08.221419 | orchestrator | 2025-05-28 17:32:08 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:32:08.221868 | orchestrator | 2025-05-28 17:32:08 | INFO  | Task 6138a53e-05e9-47c4-84bf-b7ab65960f9f is in state STARTED 2025-05-28 17:32:08.221906 | orchestrator | 2025-05-28 17:32:08 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:32:11.270494 | orchestrator | 2025-05-28 17:32:11 | INFO  | Task e47709b4-5c24-45ff-8c5e-37701531d5ee is in state STARTED 2025-05-28 17:32:11.270612 | orchestrator | 2025-05-28 17:32:11 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED 2025-05-28 17:32:11.271112 | orchestrator | 2025-05-28 17:32:11 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:32:11.271776 | orchestrator | 2025-05-28 17:32:11 | INFO  | Task 6138a53e-05e9-47c4-84bf-b7ab65960f9f is in state STARTED 2025-05-28 17:32:11.271805 | orchestrator | 2025-05-28 17:32:11 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:32:14.295287 | orchestrator | 2025-05-28 17:32:14 | INFO  | Task 
e47709b4-5c24-45ff-8c5e-37701531d5ee is in state STARTED 2025-05-28 17:32:14.295485 | orchestrator | 2025-05-28 17:32:14 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED 2025-05-28 17:32:14.296081 | orchestrator | 2025-05-28 17:32:14 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:32:14.296661 | orchestrator | 2025-05-28 17:32:14 | INFO  | Task 6138a53e-05e9-47c4-84bf-b7ab65960f9f is in state STARTED 2025-05-28 17:32:14.296684 | orchestrator | 2025-05-28 17:32:14 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:32:17.337249 | orchestrator | 2025-05-28 17:32:17 | INFO  | Task e47709b4-5c24-45ff-8c5e-37701531d5ee is in state STARTED 2025-05-28 17:32:17.337517 | orchestrator | 2025-05-28 17:32:17 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED 2025-05-28 17:32:17.338851 | orchestrator | 2025-05-28 17:32:17 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:32:17.339805 | orchestrator | 2025-05-28 17:32:17 | INFO  | Task 6138a53e-05e9-47c4-84bf-b7ab65960f9f is in state SUCCESS 2025-05-28 17:32:17.339847 | orchestrator | 2025-05-28 17:32:17 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:32:20.368785 | orchestrator | 2025-05-28 17:32:20 | INFO  | Task e47709b4-5c24-45ff-8c5e-37701531d5ee is in state STARTED 2025-05-28 17:32:20.369078 | orchestrator | 2025-05-28 17:32:20 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED 2025-05-28 17:32:20.370187 | orchestrator | 2025-05-28 17:32:20 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:32:20.371289 | orchestrator | 2025-05-28 17:32:20 | INFO  | Task 0300d580-3d9b-4dca-a827-1744c7b46ba9 is in state STARTED 2025-05-28 17:32:20.371323 | orchestrator | 2025-05-28 17:32:20 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:32:23.407995 | orchestrator | 2025-05-28 17:32:23 | INFO  | Task e47709b4-5c24-45ff-8c5e-37701531d5ee is in state STARTED 2025-05-28 17:32:23.410325 | orchestrator | 2025-05-28 17:32:23 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED 2025-05-28 17:32:23.410425 | orchestrator | 2025-05-28 17:32:23 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:32:23.410440 | orchestrator | 2025-05-28 17:32:23 | INFO  | Task 0300d580-3d9b-4dca-a827-1744c7b46ba9 is in state STARTED 2025-05-28 17:32:23.410453 | orchestrator | 2025-05-28 17:32:23 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:32:26.453701 | orchestrator | 2025-05-28 17:32:26 | INFO  | Task e47709b4-5c24-45ff-8c5e-37701531d5ee is in state STARTED 2025-05-28 17:32:26.454568 | orchestrator | 2025-05-28 17:32:26 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED 2025-05-28 17:32:26.455855 | orchestrator | 2025-05-28 17:32:26 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:32:26.456714 | orchestrator | 2025-05-28 17:32:26 | INFO  | Task 0300d580-3d9b-4dca-a827-1744c7b46ba9 is in state STARTED 2025-05-28 17:32:26.456745 | orchestrator | 2025-05-28 17:32:26 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:32:29.492901 | orchestrator | 2025-05-28 17:32:29 | INFO  | Task e47709b4-5c24-45ff-8c5e-37701531d5ee is in state STARTED 2025-05-28 17:32:29.493052 | orchestrator | 2025-05-28 17:32:29 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED 2025-05-28 17:32:29.493360 | orchestrator | 2025-05-28 17:32:29 | INFO  | Task 
c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:32:29.493885 | orchestrator | 2025-05-28 17:32:29 | INFO  | Task 0300d580-3d9b-4dca-a827-1744c7b46ba9 is in state STARTED 2025-05-28 17:32:29.493898 | orchestrator | 2025-05-28 17:32:29 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:32:32.519107 | orchestrator | 2025-05-28 17:32:32 | INFO  | Task e47709b4-5c24-45ff-8c5e-37701531d5ee is in state STARTED 2025-05-28 17:32:32.519227 | orchestrator | 2025-05-28 17:32:32 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED 2025-05-28 17:32:32.519616 | orchestrator | 2025-05-28 17:32:32 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:32:32.520217 | orchestrator | 2025-05-28 17:32:32 | INFO  | Task 0300d580-3d9b-4dca-a827-1744c7b46ba9 is in state STARTED 2025-05-28 17:32:32.520240 | orchestrator | 2025-05-28 17:32:32 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:32:35.552387 | orchestrator | 2025-05-28 17:32:35 | INFO  | Task e47709b4-5c24-45ff-8c5e-37701531d5ee is in state STARTED 2025-05-28 17:32:35.552638 | orchestrator | 2025-05-28 17:32:35 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED 2025-05-28 17:32:35.553823 | orchestrator | 2025-05-28 17:32:35 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:32:35.556742 | orchestrator | 2025-05-28 17:32:35 | INFO  | Task 0300d580-3d9b-4dca-a827-1744c7b46ba9 is in state STARTED 2025-05-28 17:32:35.557508 | orchestrator | 2025-05-28 17:32:35 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:32:38.610147 | orchestrator | 2025-05-28 17:32:38 | INFO  | Task e47709b4-5c24-45ff-8c5e-37701531d5ee is in state STARTED 2025-05-28 17:32:38.610720 | orchestrator | 2025-05-28 17:32:38 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED 2025-05-28 17:32:38.611923 | orchestrator | 2025-05-28 17:32:38 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:32:38.613306 | orchestrator | 2025-05-28 17:32:38 | INFO  | Task 0300d580-3d9b-4dca-a827-1744c7b46ba9 is in state STARTED 2025-05-28 17:32:38.613349 | orchestrator | 2025-05-28 17:32:38 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:32:41.667691 | orchestrator | 2025-05-28 17:32:41 | INFO  | Task e47709b4-5c24-45ff-8c5e-37701531d5ee is in state STARTED 2025-05-28 17:32:41.669164 | orchestrator | 2025-05-28 17:32:41 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED 2025-05-28 17:32:41.671241 | orchestrator | 2025-05-28 17:32:41 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:32:41.673700 | orchestrator | 2025-05-28 17:32:41 | INFO  | Task 0300d580-3d9b-4dca-a827-1744c7b46ba9 is in state STARTED 2025-05-28 17:32:41.673730 | orchestrator | 2025-05-28 17:32:41 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:32:44.729883 | orchestrator | 2025-05-28 17:32:44 | INFO  | Task e47709b4-5c24-45ff-8c5e-37701531d5ee is in state STARTED 2025-05-28 17:32:44.730343 | orchestrator | 2025-05-28 17:32:44 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED 2025-05-28 17:32:44.731094 | orchestrator | 2025-05-28 17:32:44 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:32:44.732141 | orchestrator | 2025-05-28 17:32:44 | INFO  | Task 0300d580-3d9b-4dca-a827-1744c7b46ba9 is in state STARTED 2025-05-28 17:32:44.732204 | orchestrator | 2025-05-28 17:32:44 | INFO  | Wait 1 
second(s) until the next check 2025-05-28 17:32:47.774291 | orchestrator | 2025-05-28 17:32:47 | INFO  | Task e47709b4-5c24-45ff-8c5e-37701531d5ee is in state STARTED 2025-05-28 17:32:47.775833 | orchestrator | 2025-05-28 17:32:47 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED 2025-05-28 17:32:47.777531 | orchestrator | 2025-05-28 17:32:47 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:32:47.779103 | orchestrator | 2025-05-28 17:32:47 | INFO  | Task 0300d580-3d9b-4dca-a827-1744c7b46ba9 is in state STARTED 2025-05-28 17:32:47.779143 | orchestrator | 2025-05-28 17:32:47 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:32:50.826067 | orchestrator | 2025-05-28 17:32:50 | INFO  | Task e47709b4-5c24-45ff-8c5e-37701531d5ee is in state STARTED 2025-05-28 17:32:50.827061 | orchestrator | 2025-05-28 17:32:50 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED 2025-05-28 17:32:50.828299 | orchestrator | 2025-05-28 17:32:50 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:32:50.829452 | orchestrator | 2025-05-28 17:32:50 | INFO  | Task 0300d580-3d9b-4dca-a827-1744c7b46ba9 is in state STARTED 2025-05-28 17:32:50.829475 | orchestrator | 2025-05-28 17:32:50 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:32:53.875650 | orchestrator | 2025-05-28 17:32:53 | INFO  | Task e47709b4-5c24-45ff-8c5e-37701531d5ee is in state STARTED 2025-05-28 17:32:53.876146 | orchestrator | 2025-05-28 17:32:53 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED 2025-05-28 17:32:53.876169 | orchestrator | 2025-05-28 17:32:53 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:32:53.877286 | orchestrator | 2025-05-28 17:32:53 | INFO  | Task 0300d580-3d9b-4dca-a827-1744c7b46ba9 is in state STARTED 2025-05-28 17:32:53.877376 | orchestrator | 2025-05-28 17:32:53 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:32:56.922901 | orchestrator | 2025-05-28 17:32:56 | INFO  | Task e47709b4-5c24-45ff-8c5e-37701531d5ee is in state STARTED 2025-05-28 17:32:56.923234 | orchestrator | 2025-05-28 17:32:56 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED 2025-05-28 17:32:56.927340 | orchestrator | 2025-05-28 17:32:56 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:32:56.928413 | orchestrator | 2025-05-28 17:32:56 | INFO  | Task 0300d580-3d9b-4dca-a827-1744c7b46ba9 is in state STARTED 2025-05-28 17:32:56.928501 | orchestrator | 2025-05-28 17:32:56 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:32:59.963007 | orchestrator | 2025-05-28 17:32:59 | INFO  | Task e47709b4-5c24-45ff-8c5e-37701531d5ee is in state STARTED 2025-05-28 17:32:59.963746 | orchestrator | 2025-05-28 17:32:59 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED 2025-05-28 17:32:59.964983 | orchestrator | 2025-05-28 17:32:59 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:32:59.966153 | orchestrator | 2025-05-28 17:32:59 | INFO  | Task 0300d580-3d9b-4dca-a827-1744c7b46ba9 is in state STARTED 2025-05-28 17:32:59.966181 | orchestrator | 2025-05-28 17:32:59 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:33:03.006449 | orchestrator | 2025-05-28 17:33:03 | INFO  | Task e47709b4-5c24-45ff-8c5e-37701531d5ee is in state STARTED 2025-05-28 17:33:03.007500 | orchestrator | 2025-05-28 17:33:03 | INFO  | Task 
2025-05-28 17:33:12.193844 | orchestrator | 2025-05-28 17:33:12 | INFO  | Task e47709b4-5c24-45ff-8c5e-37701531d5ee is in state SUCCESS
2025-05-28 17:33:12.195704 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-05-28 17:33:12.195728 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-05-28 17:33:12.195740 | orchestrator | Wednesday 28 May 2025 17:31:37 +0000 (0:00:00.088) 0:00:00.088 *********
2025-05-28 17:33:12.195751 | orchestrator | changed: [localhost]
2025-05-28 17:33:12.195868 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-05-28 17:33:12.196316 | orchestrator | Wednesday 28 May 2025 17:31:37 +0000 (0:00:00.762) 0:00:00.850 *********
2025-05-28 17:33:12.196330 | orchestrator | changed: [localhost]
2025-05-28 17:33:12.196352 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-05-28 17:33:12.196363 | orchestrator | Wednesday 28 May 2025 17:32:11 +0000 (0:00:33.795) 0:00:34.646 *********
2025-05-28 17:33:12.196373 | orchestrator | changed: [localhost]
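Functionally, the play above is an idempotent directory create followed by two HTTP downloads of the ironic-python-agent kernel and initramfs. A rough Python equivalent of the three tasks; the destination path and URLs here are illustrative, not the ones the playbook actually uses:

import pathlib
import urllib.request

dest = pathlib.Path("/opt/ironic-agent")  # illustrative destination
images = {
    "ironic-agent.initramfs": "https://example.org/ipa.initramfs",  # illustrative URL
    "ironic-agent.kernel": "https://example.org/ipa.kernel",        # illustrative URL
}

dest.mkdir(parents=True, exist_ok=True)  # "Ensure the destination directory exists"
for name, url in images.items():
    target = dest / name
    if not target.exists():  # crude idempotence; Ansible's get_url can also verify checksums
        urllib.request.urlretrieve(url, target)  # stream the image to disk

The 33.8 s versus 4.3 s split in the recap below simply reflects the initramfs being much larger than the kernel.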
2025-05-28 17:33:12.196394 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-28 17:33:12.196415 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-28 17:33:12.196499 | orchestrator | Wednesday 28 May 2025 17:32:16 +0000 (0:00:04.251) 0:00:38.898 *********
2025-05-28 17:33:12.196515 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:33:12.196526 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:33:12.196536 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:33:12.196862 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-28 17:33:12.196878 | orchestrator | Wednesday 28 May 2025 17:32:16 +0000 (0:00:00.274) 0:00:39.173 *********
2025-05-28 17:33:12.196889 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-05-28 17:33:12.196943 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-05-28 17:33:12.196994 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-05-28 17:33:12.197006 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-05-28 17:33:12.197027 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-05-28 17:33:12.197038 | orchestrator | skipping: no hosts matched
2025-05-28 17:33:12.197349 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 17:33:12.197361 | orchestrator | localhost      : ok=3  changed=3  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2025-05-28 17:33:12.197373 | orchestrator | testbed-node-0 : ok=2  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2025-05-28 17:33:12.197386 | orchestrator | testbed-node-1 : ok=2  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2025-05-28 17:33:12.197397 | orchestrator | testbed-node-2 : ok=2  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2025-05-28 17:33:12.197429 | orchestrator | TASKS RECAP ********************************************************************
2025-05-28 17:33:12.197440 | orchestrator | Wednesday 28 May 2025 17:32:16 +0000 (0:00:00.574) 0:00:39.747 *********
2025-05-28 17:33:12.197451 | orchestrator | ===============================================================================
2025-05-28 17:33:12.197462 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 33.80s
2025-05-28 17:33:12.197472 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.25s
2025-05-28 17:33:12.197483 | orchestrator | Ensure the destination directory exists --------------------------------- 0.76s
2025-05-28 17:33:12.197493 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.57s
2025-05-28 17:33:12.197504 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s
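The warning and the skipped play come from kolla-ansible's grouping pattern: a group_by task files every host into a dynamic group named enable_<service>_<True|False>, and each service play targets the corresponding _True group. Ironic is disabled on this testbed, so all hosts land in enable_ironic_False, the enable_ironic_True pattern matches nothing, and [Apply role ironic] runs against no hosts. A toy reproduction of that grouping in plain Python (not the Ansible module itself):

# Each node declares which services are enabled for it (values as in this run).
hosts = {
    "testbed-node-0": {"enable_ironic": False, "enable_designate": True},
    "testbed-node-1": {"enable_ironic": False, "enable_designate": True},
    "testbed-node-2": {"enable_ironic": False, "enable_designate": True},
}

groups = {}
for host, flags in hosts.items():
    for flag, value in flags.items():
        groups.setdefault(f"{flag}_{value}", []).append(host)

# Service plays target the *_True group; an empty group means "no hosts matched".
print(groups.get("enable_ironic_True", []))     # []  -> play skipped
print(groups.get("enable_designate_True", []))  # all three nodes -> designate deploys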
2025-05-28 17:33:12.197535 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-28 17:33:12.197556 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-28 17:33:12.197567 | orchestrator | Wednesday 28 May 2025 17:30:10 +0000 (0:00:00.479) 0:00:00.479 *********
2025-05-28 17:33:12.197577 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:33:12.197588 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:33:12.197599 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:33:12.197620 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-28 17:33:12.197630 | orchestrator | Wednesday 28 May 2025 17:30:10 +0000 (0:00:00.367) 0:00:00.847 *********
2025-05-28 17:33:12.197641 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-05-28 17:33:12.197651 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-05-28 17:33:12.197662 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-05-28 17:33:12.197683 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-05-28 17:33:12.197704 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-05-28 17:33:12.197714 | orchestrator | Wednesday 28 May 2025 17:30:11 +0000 (0:00:00.662) 0:00:01.509 *********
2025-05-28 17:33:12.197725 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 17:33:12.197746 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2025-05-28 17:33:12.197757 | orchestrator | Wednesday 28 May 2025 17:30:12 +0000 (0:00:00.601) 0:00:02.110 *********
2025-05-28 17:33:12.197820 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2025-05-28 17:33:12.197845 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2025-05-28 17:33:12.197855 | orchestrator | Wednesday 28 May 2025 17:30:15 +0000 (0:00:03.336) 0:00:05.446 *********
2025-05-28 17:33:12.197866 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2025-05-28 17:33:12.197877 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2025-05-28 17:33:12.197918 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2025-05-28 17:33:12.197929 | orchestrator | Wednesday 28 May 2025 17:30:21 +0000 (0:00:06.369) 0:00:11.816 *********
2025-05-28 17:33:12.197940 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-28 17:33:12.197963 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2025-05-28 17:33:12.197975 | orchestrator | Wednesday 28 May 2025 17:30:25 +0000 (0:00:03.293) 0:00:15.109 *********
2025-05-28 17:33:12.197987 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-28 17:33:12.197999 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2025-05-28 17:33:12.198064 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2025-05-28 17:33:12.198077 | orchestrator | Wednesday 28 May 2025 17:30:29 +0000 (0:00:03.838) 0:00:18.948 *********
2025-05-28 17:33:12.198089 | orchestrator | ok: [testbed-node-0] => (item=admin)
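The service-ks-register tasks above are the standard Keystone bootstrap for a new API service: create the service entity, register its internal and public endpoints, ensure the service project and service user exist, and grant that user the admin role. (The no_log warning is Ansible flagging that the module did not mark its update_password option as hidden from logs; it does not affect the result.) A rough openstacksdk sketch of the same sequence; the cloud name, region and password are illustrative, while the endpoint URLs are the ones shown in the log:

import openstack

conn = openstack.connect(cloud="testbed")  # illustrative clouds.yaml entry

service = conn.identity.create_service(name="designate", type="dns")
for interface, url in [
    ("internal", "https://api-int.testbed.osism.xyz:9001"),
    ("public", "https://api.testbed.osism.xyz:9001"),
]:
    conn.identity.create_endpoint(
        service_id=service.id,
        interface=interface,
        url=url,
        region_id="RegionOne",  # assumption; the deployment's region may differ
    )

project = conn.identity.create_project(name="service")
user = conn.identity.create_user(
    name="designate", password="CHANGE_ME", default_project_id=project.id
)
admin = conn.identity.find_role("admin")
conn.identity.assign_project_role_to_user(project, user, admin)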
2025-05-28 17:33:12.198113 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2025-05-28 17:33:12.198125 | orchestrator | Wednesday 28 May 2025 17:30:32 +0000 (0:00:03.560) 0:00:22.509 *********
2025-05-28 17:33:12.198138 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2025-05-28 17:33:12.198161 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2025-05-28 17:33:12.198173 | orchestrator | Wednesday 28 May 2025 17:30:36 +0000 (0:00:03.674) 0:00:26.183 *********
2025-05-28 17:33:12.198195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-28 17:33:12.198212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-28 17:33:12.198273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-28 17:33:12.198288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2',
'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-28 17:33:12.198303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-28 17:33:12.198321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-28 17:33:12.198333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.198344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.198362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.198404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.198418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.198430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.199571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.199603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.199614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.199637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.199681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.199693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.199704 | orchestrator | 2025-05-28 17:33:12.199714 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-05-28 17:33:12.199725 | orchestrator | Wednesday 28 May 2025 17:30:39 +0000 (0:00:03.474) 0:00:29.658 ********* 2025-05-28 17:33:12.199735 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:12.199745 | orchestrator | 2025-05-28 17:33:12.199755 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-05-28 17:33:12.199765 | orchestrator | Wednesday 28 May 2025 17:30:39 +0000 (0:00:00.186) 0:00:29.844 ********* 2025-05-28 17:33:12.199775 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:12.199785 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:12.199795 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:12.199805 | orchestrator | 2025-05-28 17:33:12.199859 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-28 17:33:12.199872 | orchestrator | Wednesday 28 May 2025 17:30:40 +0000 (0:00:00.694) 0:00:30.538 ********* 2025-05-28 17:33:12.199961 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:33:12.199978 | orchestrator | 2025-05-28 17:33:12.199988 | orchestrator | TASK 
[service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-05-28 17:33:12.199998 | orchestrator | Wednesday 28 May 2025 17:30:41 +0000 (0:00:01.022) 0:00:31.561 ********* 2025-05-28 17:33:12.200008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-28 17:33:12.200027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-28 17:33:12.200065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-28 17:33:12.200078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2025-05-28 17:33:12.200088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-28 17:33:12.200103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-28 17:33:12.200119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.200129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.200139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.200174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.200186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.200201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.200212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.200233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.200243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.200253 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.200288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.200300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.200310 | orchestrator | 2025-05-28 17:33:12.200320 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-05-28 17:33:12.200329 | orchestrator | Wednesday 28 May 2025 17:30:47 +0000 (0:00:06.360) 0:00:37.921 ********* 2025-05-28 17:33:12.200344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-28 17:33:12.200360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-28 17:33:12.200370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.200380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.200415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.200426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.200436 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:12.200451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2025-05-28 17:33:12.200467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-28 17:33:12.200477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.200487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.200523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.200535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.200545 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:12.200578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-28 17:33:12.200596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-28 17:33:12.200607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.200619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.200658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.200670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.200681 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:12.200692 | orchestrator | 2025-05-28 17:33:12.200703 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-05-28 17:33:12.200714 | orchestrator | Wednesday 28 May 2025 17:30:49 +0000 (0:00:01.301) 0:00:39.223 ********* 2025-05-28 17:33:12.200730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-28 17:33:12.200747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-28 17:33:12.200759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.200771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.200808 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.200820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.200831 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:12.200852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-28 17:33:12.200865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-28 17:33:12.200876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.200887 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.200975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.200988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.200998 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:12.201015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-28 17:33:12.201044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-28 17:33:12.201055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.201065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.201075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.201112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.201131 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:12.201140 | orchestrator | 2025-05-28 17:33:12.201150 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-05-28 17:33:12.201160 | orchestrator | Wednesday 28 May 2025 17:30:51 +0000 (0:00:02.271) 0:00:41.494 ********* 2025-05-28 17:33:12.201170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-28 17:33:12.201184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-28 17:33:12.201195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-28 17:33:12.201205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-28 17:33:12.201239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-28 17:33:12.201260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
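
The healthcheck dictionaries carried by every item map directly onto the container's healthcheck. The kolla images ship small helper scripts for this: healthcheck_curl probes an HTTP endpoint (the per-node API addresses on port 9001 above), healthcheck_listen verifies that a process is listening on a port (named on 53), and healthcheck_port verifies that a process has a connection on a port (5672 is the RabbitMQ port, so those checks assert the service is attached to the message bus). Roughly how one of the dicts from this log renders as container configuration, shown in compose-style notation:

    # Approximate healthcheck produced from the designate_worker item above.
    healthcheck:
      test: ["CMD-SHELL", "healthcheck_port designate-worker 5672"]
      interval: 30s
      timeout: 30s
      retries: 3
      start_period: 5s
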
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-28 17:33:12.201274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.201285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.201295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.201305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.201315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}}) 2025-05-28 17:33:12.201350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.201367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.201381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.201390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.201398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.201406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
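
The config.json files being copied in this task are the kolla bootstrap contract. Each container bind-mounts /etc/kolla/<service>/ read-only at /var/lib/kolla/config_files/ (the first volume in every item above), and the kolla_start entrypoint reads config.json from that path to copy configuration into place, set ownership and permissions, and exec the service. An illustrative config.json for designate_api under that contract; the exact rendered contents will differ:

    # Illustrative config.json consumed by kolla_start inside the container
    # (a sketch, not the file rendered by this job).
    {
      "command": "designate-api --config-file /etc/designate/designate.conf",
      "config_files": [
        {
          "source": "/var/lib/kolla/config_files/designate.conf",
          "dest": "/etc/designate/designate.conf",
          "owner": "designate",
          "perm": "0600"
        }
      ]
    }
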
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.201415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.201427 | orchestrator | 2025-05-28 17:33:12.201455 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-05-28 17:33:12.201465 | orchestrator | Wednesday 28 May 2025 17:30:58 +0000 (0:00:06.734) 0:00:48.228 ********* 2025-05-28 17:33:12.201474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-28 17:33:12.201485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-28 17:33:12.201494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-28 17:33:12.201502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-28 17:33:12.201510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-28 17:33:12.201527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-28 17:33:12.201536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.201547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}}) 2025-05-28 17:33:12.201556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.201564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.201572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.201590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.201599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.201607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
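
Copying over designate.conf renders one merged INI file per service per node, which is why it is the slowest task in this stretch (the 0:00:20.586 stamped on the next task header is the elapsed time of this one). kolla-ansible builds the file by layering operator overrides from the custom-config tree on top of the role template with its merge_configs plugin; a sketch of that pattern, with illustrative source paths:

    # Sketch of the merge_configs layering used for designate.conf.
    # Later sources override earlier ones; paths are illustrative.
    - name: Copying over designate.conf
      merge_configs:
        sources:
          - "{{ role_path }}/templates/designate.conf.j2"
          - "{{ node_custom_config }}/global.conf"
          - "{{ node_custom_config }}/designate.conf"
          - "{{ node_custom_config }}/designate/{{ item.key }}.conf"
        dest: "/etc/kolla/{{ item.key }}/designate.conf"
      when:
        - item.value.enabled | bool
        - inventory_hostname in groups[item.value.group]
      with_dict: "{{ designate_services }}"
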
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.201628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.201637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.201646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.201654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.201667 | orchestrator | 2025-05-28 17:33:12.201675 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-05-28 17:33:12.201683 | orchestrator | Wednesday 28 May 2025 17:31:18 +0000 (0:00:20.586) 0:01:08.815 ********* 2025-05-28 17:33:12.201691 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-28 17:33:12.201699 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-28 17:33:12.201707 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-28 17:33:12.201715 | orchestrator | 2025-05-28 17:33:12.201727 | orchestrator | TASK [designate : Copying over named.conf] 
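
pools.yaml, templated just above, is the piece that actually describes the DNS pool Designate manages: the public NS records, the nameservers that answer queries, and the targets that designate-worker writes zones to, here BIND 9 driven over rndc with designate-mdns (port 5354) as the zone-transfer master. A minimal sketch of the structure for this layout; the hostname, addresses and key path are illustrative, not the rendered testbed file:

    # Minimal pools.yaml sketch for a bind9 target (values illustrative).
    - name: default
      description: Default BIND9 pool
      ns_records:
        - hostname: ns1.testbed.osism.xyz.
          priority: 1
      nameservers:
        # Resolvers Designate polls to confirm zone propagation.
        - host: 192.168.16.10
          port: 53
      targets:
        - type: bind9
          masters:
            # designate-mdns, the source named transfers zones from.
            - host: 192.168.16.10
              port: 5354
          options:
            host: 192.168.16.10
            port: 53
            rndc_host: 192.168.16.10
            rndc_port: 953
            rndc_key_file: /etc/designate/rndc.key
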
************************************* 2025-05-28 17:33:12.201735 | orchestrator | Wednesday 28 May 2025 17:31:25 +0000 (0:00:06.413) 0:01:15.229 ********* 2025-05-28 17:33:12.201743 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-28 17:33:12.201751 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-28 17:33:12.201758 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-28 17:33:12.201766 | orchestrator | 2025-05-28 17:33:12.201774 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-05-28 17:33:12.201782 | orchestrator | Wednesday 28 May 2025 17:31:28 +0000 (0:00:03.633) 0:01:18.863 ********* 2025-05-28 17:33:12.201790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-28 17:33:12.201802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-28 17:33:12.201811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-28 17:33:12.201824 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-28 17:33:12.201836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.201845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.201853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.201864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-28 17:33:12.201873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.201885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.201905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.201919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-28 17:33:12.201928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.201941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.201950 | 
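
named.conf, rndc.conf and rndc.key (this task and the next one) wire BIND 9 to Designate: named.conf declares the control channel and keys on the named side, while rndc.conf plus the shared rndc.key let a client drive named remotely (create, reload and delete zones). That is why the changed/skipping split in these two tasks differs from the earlier ones: only designate-backend-bind9 (which runs named) and designate-worker (which calls rndc against it, see rndc_key_file in the pools.yaml sketch above) receive the files, and every other service item is skipped. Roughly the extra condition on the loop, in the same sketch style as above:

    # Sketch: rndc.conf/rndc.key are only copied where they are used.
    - name: Copying over rndc.conf
      template:
        src: rndc.conf.j2
        dest: "/etc/kolla/{{ item.key }}/rndc.conf"
      when:
        - item.key in ['designate-backend-bind9', 'designate-worker']
        - inventory_hostname in groups[item.value.group]
      with_dict: "{{ designate_services }}"
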
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.201958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.201971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.201979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.201988 | orchestrator | 2025-05-28 17:33:12.201999 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-05-28 17:33:12.202007 | orchestrator | Wednesday 28 May 2025 17:31:31 +0000 (0:00:02.977) 0:01:21.841 ********* 2025-05-28 17:33:12.202039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-28 17:33:12.202053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-28 17:33:12.202062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-28 17:33:12.202075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-28 17:33:12.202083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.202097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.202105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.202127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-28 17:33:12.202136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.202149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.202157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.202169 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-28 17:33:12.202178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.202186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.202198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.202207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.202231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.202240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.202248 | orchestrator | 2025-05-28 17:33:12.202256 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-28 17:33:12.202264 | orchestrator | Wednesday 28 May 2025 17:31:34 +0000 (0:00:02.362) 0:01:24.203 ********* 2025-05-28 17:33:12.202272 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:12.202280 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:12.202288 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:12.202296 | orchestrator | 2025-05-28 17:33:12.202303 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-05-28 17:33:12.202311 | orchestrator | Wednesday 28 May 2025 17:31:34 +0000 (0:00:00.392) 0:01:24.595 ********* 2025-05-28 17:33:12.202324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-28 17:33:12.202333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-28 17:33:12.202349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.202358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.202367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.202375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.202383 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:12.202395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-28 17:33:12.202404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-28 17:33:12.202420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.202428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.202436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.202444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.202452 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:12.202465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-28 17:33:12.202473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-28 17:33:12.202489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.202498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.202506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.202514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:33:12.202522 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:12.202530 | orchestrator | 2025-05-28 17:33:12.202538 | orchestrator | TASK [designate : Check designate 
containers] ********************************** 2025-05-28 17:33:12.202546 | orchestrator | Wednesday 28 May 2025 17:31:35 +0000 (0:00:00.889) 0:01:25.484 ********* 2025-05-28 17:33:12.202558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-28 17:33:12.202566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-28 17:33:12.202588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-28 17:33:12.202596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-28 
17:33:12.202605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-28 17:33:12.202617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-28 17:33:12.202626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.202639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.202662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.202671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.202679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.202688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.202699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.202708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.202721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.202733 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.202741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.202750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:33:12.202758 | orchestrator | 2025-05-28 17:33:12.202766 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-28 17:33:12.202774 | orchestrator | Wednesday 28 May 2025 17:31:40 +0000 (0:00:04.484) 0:01:29.969 ********* 2025-05-28 17:33:12.202782 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:12.202790 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:12.202798 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:12.202805 | orchestrator | 2025-05-28 17:33:12.202813 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-05-28 17:33:12.202821 | orchestrator | Wednesday 28 May 2025 17:31:40 +0000 (0:00:00.284) 0:01:30.254 ********* 2025-05-28 17:33:12.202829 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-05-28 17:33:12.202837 | orchestrator | 2025-05-28 17:33:12.202845 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-05-28 17:33:12.202853 | orchestrator | Wednesday 28 May 2025 17:31:42 +0000 (0:00:02.573) 0:01:32.827 ********* 2025-05-28 17:33:12.202860 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-28 17:33:12.202868 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-05-28 17:33:12.202876 | orchestrator | 2025-05-28 17:33:12.202884 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-05-28 17:33:12.202911 | orchestrator | Wednesday 28 May 2025 17:31:44 +0000 (0:00:02.059) 0:01:34.887 ********* 2025-05-28 17:33:12.202919 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:33:12.202927 | orchestrator | 2025-05-28 
17:33:12.202935 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-05-28 17:33:12.202946 | orchestrator | Wednesday 28 May 2025 17:32:02 +0000 (0:00:18.039) 0:01:52.926 ********* 2025-05-28 17:33:12.202955 | orchestrator | 2025-05-28 17:33:12.202962 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-05-28 17:33:12.202970 | orchestrator | Wednesday 28 May 2025 17:32:03 +0000 (0:00:00.074) 0:01:53.001 ********* 2025-05-28 17:33:12.202978 | orchestrator | 2025-05-28 17:33:12.202986 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-05-28 17:33:12.202994 | orchestrator | Wednesday 28 May 2025 17:32:03 +0000 (0:00:00.081) 0:01:53.082 ********* 2025-05-28 17:33:12.203001 | orchestrator | 2025-05-28 17:33:12.203009 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-05-28 17:33:12.203017 | orchestrator | Wednesday 28 May 2025 17:32:03 +0000 (0:00:00.069) 0:01:53.151 ********* 2025-05-28 17:33:12.203025 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:33:12.203033 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:33:12.203040 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:33:12.203048 | orchestrator | 2025-05-28 17:33:12.203056 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-05-28 17:33:12.203064 | orchestrator | Wednesday 28 May 2025 17:32:12 +0000 (0:00:08.997) 0:02:02.149 ********* 2025-05-28 17:33:12.203071 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:33:12.203079 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:33:12.203087 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:33:12.203095 | orchestrator | 2025-05-28 17:33:12.203102 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-05-28 17:33:12.203110 | orchestrator | Wednesday 28 May 2025 17:32:24 +0000 (0:00:12.064) 0:02:14.213 ********* 2025-05-28 17:33:12.203118 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:33:12.203126 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:33:12.203134 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:33:12.203142 | orchestrator | 2025-05-28 17:33:12.203149 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-05-28 17:33:12.203157 | orchestrator | Wednesday 28 May 2025 17:32:31 +0000 (0:00:07.650) 0:02:21.864 ********* 2025-05-28 17:33:12.203165 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:33:12.203173 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:33:12.203180 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:33:12.203188 | orchestrator | 2025-05-28 17:33:12.203196 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-05-28 17:33:12.203207 | orchestrator | Wednesday 28 May 2025 17:32:44 +0000 (0:00:12.340) 0:02:34.205 ********* 2025-05-28 17:33:12.203215 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:33:12.203223 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:33:12.203231 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:33:12.203239 | orchestrator | 2025-05-28 17:33:12.203247 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-05-28 17:33:12.203255 | orchestrator | Wednesday 28 May 2025 17:32:51 +0000 (0:00:07.110) 0:02:41.315 ********* 2025-05-28 17:33:12.203262 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:33:12.203270 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:33:12.203278 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:33:12.203286 | orchestrator |
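Each restarted container is then watched through the healthcheck definition carried in its service item above (healthcheck_port <service> <port>, healthcheck_curl <url>, healthcheck_listen <name> <port>). As a rough illustration of what such probes reduce to, here is a minimal Python sketch, assuming plain TCP-connect and HTTP-GET semantics; the actual healthcheck_* helper scripts inside the kolla images may behave differently:

```python
# Minimal sketch of the healthcheck probes referenced above; assumes plain
# TCP-connect and HTTP-GET semantics. Treat as illustrative only: the real
# kolla helper scripts may inspect sockets or processes instead.
import socket
import urllib.request

def healthcheck_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Succeed if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def healthcheck_curl(url: str, timeout: float = 30.0) -> bool:
    """Succeed if the endpoint answers the request without an HTTP error."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except OSError:  # URLError and HTTPError both derive from OSError
        return False

if __name__ == "__main__":
    print(healthcheck_curl("http://192.168.16.10:9001"))  # designate-api check
    print(healthcheck_port("192.168.16.10", 5672))        # worker -> RabbitMQ port
```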
2025-05-28 17:33:12.203293 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-05-28 17:33:12.203301 | orchestrator | Wednesday 28 May 2025 17:33:01 +0000 (0:00:09.697) 0:02:51.013 ********* 2025-05-28 17:33:12.203309 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:33:12.203317 | orchestrator | 2025-05-28 17:33:12.203325 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:33:12.203338 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-28 17:33:12.203347 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-28 17:33:12.203354 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-28 17:33:12.203362 | orchestrator | 2025-05-28 17:33:12.203370 | orchestrator | 2025-05-28 17:33:12.203378 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:33:12.203386 | orchestrator | Wednesday 28 May 2025 17:33:08 +0000 (0:00:07.791) 0:02:58.805 ********* 2025-05-28 17:33:12.203394 | orchestrator | =============================================================================== 2025-05-28 17:33:12.203401 | orchestrator | designate : Copying over designate.conf -------------------------------- 20.59s 2025-05-28 17:33:12.203409 | orchestrator | designate : Running Designate bootstrap container ---------------------- 18.04s 2025-05-28 17:33:12.203417 | orchestrator | designate : Restart designate-producer container ----------------------- 12.34s 2025-05-28 17:33:12.203425 | orchestrator | designate : Restart designate-api container ---------------------------- 12.06s 2025-05-28 17:33:12.203432 | orchestrator | designate : Restart designate-worker container -------------------------- 9.70s 2025-05-28 17:33:12.203440 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 9.00s 2025-05-28 17:33:12.203448 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.79s 2025-05-28 17:33:12.203456 | orchestrator | designate : Restart designate-central container ------------------------- 7.65s 2025-05-28 17:33:12.203464 | orchestrator | designate : Restart designate-mdns container ---------------------------- 7.11s 2025-05-28 17:33:12.203471 | orchestrator | designate : Copying over config.json files for services ----------------- 6.73s 2025-05-28 17:33:12.203479 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.41s 2025-05-28 17:33:12.203487 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.37s 2025-05-28 17:33:12.203495 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.36s 2025-05-28 17:33:12.203506 | orchestrator | designate : Check designate containers ---------------------------------- 4.49s 2025-05-28 17:33:12.203514 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.84s 2025-05-28 17:33:12.203521 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.67s 2025-05-28 17:33:12.203529 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.63s 2025-05-28 17:33:12.203537 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.56s 2025-05-28 17:33:12.203545 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.47s 2025-05-28 17:33:12.203552 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.34s
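The PLAY RECAP and TASKS RECAP blocks above close out the designate play and are the natural hook for post-processing these logs. A small, hypothetical parser for the per-host recap lines (the regex simply mirrors the format printed above and is not any official API):

```python
# Hypothetical parser for the "PLAY RECAP" host lines printed above; the
# regex mirrors the observed format and is an assumption, not a stable API.
import re

RECAP_RE = re.compile(
    r"^(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)\s+"
    r"skipped=(?P<skipped>\d+)\s+rescued=(?P<rescued>\d+)\s+ignored=(?P<ignored>\d+)"
)

def parse_recap(lines):
    """Map each host to its integer task counters."""
    stats = {}
    for line in lines:
        match = RECAP_RE.match(line.strip())
        if match:
            fields = match.groupdict()
            stats[fields.pop("host")] = {k: int(v) for k, v in fields.items()}
    return stats

recap = parse_recap([
    "testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0",
])
assert recap["testbed-node-0"]["failed"] == 0
```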
2025-05-28 17:33:12.203560 | orchestrator | 2025-05-28 17:33:12 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED 2025-05-28 17:33:12.203568 | orchestrator | 2025-05-28 17:33:12 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:33:12.203576 | orchestrator | 2025-05-28 17:33:12 | INFO  | Task a0f9f78f-e6a3-424a-b187-135b859fe70c is in state STARTED 2025-05-28 17:33:12.203584 | orchestrator | 2025-05-28 17:33:12 | INFO  | Task 0300d580-3d9b-4dca-a827-1744c7b46ba9 is in state STARTED 2025-05-28 17:33:12.203592 | orchestrator | 2025-05-28 17:33:12 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:33:15.260493 | orchestrator | 2025-05-28 17:33:15 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED 2025-05-28 17:33:15.262288 | orchestrator | 2025-05-28 17:33:15 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:33:15.264225 | orchestrator | 2025-05-28 17:33:15 | INFO  | Task a0f9f78f-e6a3-424a-b187-135b859fe70c is in state STARTED 2025-05-28 17:33:15.266209 | orchestrator | 2025-05-28 17:33:15 | INFO  | Task 0300d580-3d9b-4dca-a827-1744c7b46ba9 is in state STARTED 2025-05-28 17:33:15.266266 | orchestrator | 2025-05-28 17:33:15 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:33:18.314598 | orchestrator | 2025-05-28 17:33:18 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED 2025-05-28 17:33:18.316373 | orchestrator | 2025-05-28 17:33:18 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:33:18.317967 | orchestrator | 2025-05-28 17:33:18 | INFO  | Task a0f9f78f-e6a3-424a-b187-135b859fe70c is in state STARTED 2025-05-28 17:33:18.320128 | orchestrator | 2025-05-28 17:33:18 | INFO  | Task 0300d580-3d9b-4dca-a827-1744c7b46ba9 is in state STARTED 2025-05-28 17:33:18.320226 | orchestrator | 2025-05-28 17:33:18 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:33:21.368984 | orchestrator | 2025-05-28 17:33:21 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED 2025-05-28 17:33:21.371346 | orchestrator | 2025-05-28 17:33:21 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:33:21.373331 | orchestrator | 2025-05-28 17:33:21 | INFO  | Task a0f9f78f-e6a3-424a-b187-135b859fe70c is in state STARTED 2025-05-28 17:33:21.374252 | orchestrator | 2025-05-28 17:33:21 | INFO  | Task 0300d580-3d9b-4dca-a827-1744c7b46ba9 is in state STARTED 2025-05-28 17:33:21.374278 | orchestrator | 2025-05-28 17:33:21 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:33:24.421644 | orchestrator | 2025-05-28 17:33:24 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state STARTED 2025-05-28 17:33:24.422497 | orchestrator | 2025-05-28 17:33:24 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED 2025-05-28 17:33:24.422713 | orchestrator | 2025-05-28 17:33:24 | INFO  | Task a0f9f78f-e6a3-424a-b187-135b859fe70c is in state STARTED 2025-05-28 17:33:24.423401 | orchestrator | 2025-05-28 17:33:24 | INFO  | Task 0300d580-3d9b-4dca-a827-1744c7b46ba9 is in state STARTED 2025-05-28 17:33:24.423431 | orchestrator | 2025-05-28 17:33:24 | INFO  | Wait 1 second(s) until the next check 2025-05-28 17:33:27.474618 | orchestrator | 2025-05-28 17:33:27 | INFO  | Task ef03f1a9-5db9-4885-b01e-f3b1509baaf4 is in state STARTED 2025-05-28 17:33:27.474742 | orchestrator | 2025-05-28 17:33:27 | INFO  | Task c684425b-393e-4d04-8709-16507c816940 is in state SUCCESS 2025-05-28 17:33:27.477064 | orchestrator | 2025-05-28 17:33:27.477100 | orchestrator |
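The INFO lines above come from the OSISM side of the job, which polls the Celery-style task IDs it has queued until each one leaves the STARTED state. A minimal sketch of such a wait loop (get_state is a hypothetical stand-in for whatever lookup the watcher actually performs):

```python
# Minimal sketch of the polling loop suggested by the INFO lines above.
# get_state is a hypothetical stand-in for the watcher's real state lookup.
import time

def wait_for_tasks(task_ids, get_state, interval: float = 1.0) -> None:
    """Poll every task until all of them reach a terminal state."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):  # sorted() copies, so discard is safe
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:.0f} second(s) until the next check")
            time.sleep(interval)
```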
2025-05-28 17:33:27.477112 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-28 17:33:27.477125 | orchestrator | 2025-05-28 17:33:27.477136 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-28 17:33:27.477148 | orchestrator | Wednesday 28 May 2025 17:29:08 +0000 (0:00:00.266) 0:00:00.266 ********* 2025-05-28 17:33:27.477159 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:33:27.477171 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:33:27.477182 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:33:27.477193 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:33:27.477204 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:33:27.477214 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:33:27.477225 | orchestrator | 2025-05-28 17:33:27.477236 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-28 17:33:27.477247 | orchestrator | Wednesday 28 May 2025 17:29:08 +0000 (0:00:00.661) 0:00:00.928 ********* 2025-05-28 17:33:27.477258 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-05-28 17:33:27.477269 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-05-28 17:33:27.477308 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-05-28 17:33:27.477319 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-05-28 17:33:27.477330 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-05-28 17:33:27.477426 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-05-28 17:33:27.477968 | orchestrator |
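The grouping task above is kolla-ansible's usual group_by pattern: every host is added to a dynamic group whose name embeds the flag and its value, which is why the item reads enable_neutron_True. A sketch of the naming scheme (only enable_neutron actually appears in this log; the helper below is illustrative, not the role's real implementation):

```python
# Illustrative reconstruction of the dynamic group names seen above
# (e.g. enable_neutron_True); not the actual group_by task from the role.
def service_groups(host_vars: dict) -> list[str]:
    """Build one group name per enable_* flag, embedding its boolean value."""
    return [
        f"{flag}_{bool(value)}"
        for flag, value in sorted(host_vars.items())
        if flag.startswith("enable_")
    ]

assert service_groups({"enable_neutron": True}) == ["enable_neutron_True"]
```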
2025-05-28 17:33:27.477996 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-05-28 17:33:27.478008 | orchestrator | 2025-05-28 17:33:27.478080 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-28 17:33:27.478094 | orchestrator | Wednesday 28 May 2025 17:29:09 +0000 (0:00:00.587) 0:00:01.515 ********* 2025-05-28 17:33:27.478107 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:33:27.478120 | orchestrator | 2025-05-28 17:33:27.478131 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-05-28 17:33:27.478141 | orchestrator | Wednesday 28 May 2025 17:29:10 +0000 (0:00:01.270) 0:00:02.786 ********* 2025-05-28 17:33:27.478152 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:33:27.478163 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:33:27.478174 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:33:27.478184 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:33:27.478195 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:33:27.478206 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:33:27.478216 | orchestrator | 2025-05-28 17:33:27.478227 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-05-28 17:33:27.478238 | orchestrator | Wednesday 28 May 2025 17:29:11 +0000 (0:00:01.277) 0:00:04.063 ********* 2025-05-28 17:33:27.478249 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:33:27.478260 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:33:27.478271 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:33:27.478281 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:33:27.478292 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:33:27.478321 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:33:27.478332 | orchestrator | 2025-05-28 17:33:27.478343 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-05-28 17:33:27.478353 | orchestrator | Wednesday 28 May 2025 17:29:13 +0000 (0:00:01.128) 0:00:05.192 ********* 2025-05-28 17:33:27.478364 | orchestrator | ok: [testbed-node-0] => { 2025-05-28 17:33:27.478375 | orchestrator |  "changed": false, 2025-05-28 17:33:27.478386 | orchestrator |  "msg": "All assertions passed" 2025-05-28 17:33:27.478397 | orchestrator | } 2025-05-28 17:33:27.478408 | orchestrator | ok: [testbed-node-1] => { 2025-05-28 17:33:27.478419 | orchestrator |  "changed": false, 2025-05-28 17:33:27.478430 | orchestrator |  "msg": "All assertions passed" 2025-05-28 17:33:27.478440 | orchestrator | } 2025-05-28 17:33:27.478451 | orchestrator | ok: [testbed-node-2] => { 2025-05-28 17:33:27.478461 | orchestrator |  "changed": false, 2025-05-28 17:33:27.478472 | orchestrator |  "msg": "All assertions passed" 2025-05-28 17:33:27.478483 | orchestrator | } 2025-05-28 17:33:27.478493 | orchestrator | ok: [testbed-node-3] => { 2025-05-28 17:33:27.478504 | orchestrator |  "changed": false, 2025-05-28 17:33:27.478514 | orchestrator |  "msg": "All assertions passed" 2025-05-28 17:33:27.478525 | orchestrator | } 2025-05-28 17:33:27.478535 | orchestrator | ok: [testbed-node-4] => { 2025-05-28 17:33:27.478546 | orchestrator |  "changed": false, 2025-05-28 17:33:27.478556 | orchestrator |  "msg": "All assertions passed" 2025-05-28 17:33:27.478567 | orchestrator | } 2025-05-28 17:33:27.478577 | orchestrator | ok: [testbed-node-5] => { 2025-05-28 17:33:27.478588 | orchestrator |  "changed": false, 2025-05-28 17:33:27.478601 | orchestrator |  "msg": "All assertions passed" 2025-05-28 17:33:27.478613 | orchestrator | } 2025-05-28 17:33:27.478625 | orchestrator | 2025-05-28 17:33:27.478639 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-05-28 17:33:27.478666 | orchestrator | Wednesday 28 May 2025 17:29:13 +0000 (0:00:00.707) 0:00:05.899 ********* 2025-05-28 17:33:27.478679 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:27.478691 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:27.478703 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:27.478713 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:33:27.478724 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:33:27.478735 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:33:27.478745 | orchestrator | 2025-05-28 17:33:27.478756 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-05-28 17:33:27.478767 | orchestrator | Wednesday 28 May 2025 17:29:14 +0000 (0:00:00.577) 0:00:06.476 ********* 2025-05-28
17:33:27.478777 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-05-28 17:33:27.478788 | orchestrator | 2025-05-28 17:33:27.478798 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-05-28 17:33:27.478809 | orchestrator | Wednesday 28 May 2025 17:29:17 +0000 (0:00:03.501) 0:00:09.978 ********* 2025-05-28 17:33:27.478820 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-05-28 17:33:27.478831 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-05-28 17:33:27.478842 | orchestrator | 2025-05-28 17:33:27.478924 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-05-28 17:33:27.478939 | orchestrator | Wednesday 28 May 2025 17:29:24 +0000 (0:00:06.122) 0:00:16.100 ********* 2025-05-28 17:33:27.478950 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-28 17:33:27.478960 | orchestrator | 2025-05-28 17:33:27.478971 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-05-28 17:33:27.478982 | orchestrator | Wednesday 28 May 2025 17:29:27 +0000 (0:00:03.203) 0:00:19.304 ********* 2025-05-28 17:33:27.478993 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-28 17:33:27.479004 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-05-28 17:33:27.479014 | orchestrator | 2025-05-28 17:33:27.479025 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-05-28 17:33:27.479036 | orchestrator | Wednesday 28 May 2025 17:29:31 +0000 (0:00:03.933) 0:00:23.237 ********* 2025-05-28 17:33:27.479046 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-28 17:33:27.479057 | orchestrator | 2025-05-28 17:33:27.479067 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-05-28 17:33:27.479078 | orchestrator | Wednesday 28 May 2025 17:29:34 +0000 (0:00:03.465) 0:00:26.703 ********* 2025-05-28 17:33:27.479088 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-05-28 17:33:27.479099 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-05-28 17:33:27.479110 | orchestrator | 2025-05-28 17:33:27.479120 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-28 17:33:27.479131 | orchestrator | Wednesday 28 May 2025 17:29:42 +0000 (0:00:07.519) 0:00:34.222 ********* 2025-05-28 17:33:27.479141 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:27.479152 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:27.479163 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:27.479173 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:33:27.479184 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:33:27.479194 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:33:27.479205 | orchestrator | 2025-05-28 17:33:27.479215 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-05-28 17:33:27.479226 | orchestrator | Wednesday 28 May 2025 17:29:42 +0000 (0:00:00.731) 0:00:34.954 ********* 2025-05-28 17:33:27.479237 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:27.479247 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:27.479258 | 
orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:27.479268 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:33:27.479339 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:33:27.479350 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:33:27.479361 | orchestrator | 2025-05-28 17:33:27.479372 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-05-28 17:33:27.479383 | orchestrator | Wednesday 28 May 2025 17:29:45 +0000 (0:00:02.219) 0:00:37.173 ********* 2025-05-28 17:33:27.479394 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:33:27.479404 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:33:27.479415 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:33:27.479426 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:33:27.479444 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:33:27.479455 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:33:27.479465 | orchestrator | 2025-05-28 17:33:27.479476 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-05-28 17:33:27.479487 | orchestrator | Wednesday 28 May 2025 17:29:46 +0000 (0:00:01.053) 0:00:38.227 ********* 2025-05-28 17:33:27.479498 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:27.479509 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:27.479519 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:27.479530 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:33:27.479541 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:33:27.479552 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:33:27.479562 | orchestrator | 2025-05-28 17:33:27.479573 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-05-28 17:33:27.479584 | orchestrator | Wednesday 28 May 2025 17:29:48 +0000 (0:00:02.028) 0:00:40.255 ********* 2025-05-28 17:33:27.479599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 17:33:27.479646 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-28 17:33:27.479660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 17:33:27.479680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 17:33:27.479697 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-28 17:33:27.479709 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-28 17:33:27.479721 | orchestrator |
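Each service item above optionally carries an 'haproxy' mapping: one internal and one external listener, each with mode, port, listen_port, and (externally) an external_fqdn. Purely to illustrate what that data expresses, and not kolla-ansible's actual template, it can be rendered into an haproxy-style section like this:

```python
# Illustrative rendering of the 'haproxy' mapping carried by the service items
# above; NOT kolla-ansible's real template, just a sketch of what the data means.
def render_listener(name: str, cfg: dict, backends: dict) -> str:
    lines = [
        f"listen {name}",
        f"    mode {cfg['mode']}",
        f"    bind *:{cfg['listen_port']}",
    ]
    for host, address in sorted(backends.items()):
        lines.append(f"    server {host} {address}:{cfg['port']} check")
    return "\n".join(lines)

# Values taken from the neutron_server entry in the log records above.
internal = {"enabled": True, "mode": "http", "external": False,
            "port": "9696", "listen_port": "9696"}
print(render_listener("neutron_server", internal, {
    "testbed-node-0": "192.168.16.10",
    "testbed-node-1": "192.168.16.11",
    "testbed-node-2": "192.168.16.12",
}))
```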
2025-05-28 17:33:27.479732 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-05-28 17:33:27.479743 | orchestrator | Wednesday 28 May 2025 17:29:50 +0000 (0:00:02.756) 0:00:43.011 ********* 2025-05-28 17:33:27.479754 | orchestrator | [WARNING]: Skipped 2025-05-28 17:33:27.479765 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-05-28 17:33:27.479776 | orchestrator | due to this access issue: 2025-05-28 17:33:27.479787 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-05-28 17:33:27.479798 | orchestrator | a directory 2025-05-28 17:33:27.479809 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-28 17:33:27.479819 | orchestrator | 2025-05-28 17:33:27.479830 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-28 17:33:27.479866 | orchestrator | Wednesday 28 May 2025 17:29:51 +0000 (0:00:00.857) 0:00:43.869 ********* 2025-05-28 17:33:27.479932 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:33:27.479948 | orchestrator | 2025-05-28 17:33:27.479959 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-05-28 17:33:27.479970 | orchestrator | Wednesday 28 May 2025 17:29:53 +0000 (0:00:01.233) 0:00:45.103 ********* 2025-05-28 17:33:27.479981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 17:33:27.480007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 17:33:27.480019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image':
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 17:33:27.480031 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-28 17:33:27.480074 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-28 17:33:27.480096 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-28 17:33:27.480107 | orchestrator | 2025-05-28 17:33:27.480118 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-05-28 17:33:27.480129 | orchestrator | Wednesday 28 May 2025 17:29:56 +0000 (0:00:03.351) 0:00:48.455 ********* 2025-05-28 17:33:27.480168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 17:33:27.480180 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:27.480192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 17:33:27.480203 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:27.480214 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:33:27.480225 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:33:27.480264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 17:33:27.480286 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:27.480297 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:33:27.480308 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:33:27.480325 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:33:27.480336 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:33:27.480347 | orchestrator | 2025-05-28 17:33:27.480357 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-05-28 17:33:27.480368 | orchestrator | Wednesday 28 May 2025 17:29:59 +0000 (0:00:03.485) 0:00:51.940 ********* 2025-05-28 17:33:27.480379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 17:33:27.480390 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:27.480409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 17:33:27.480429 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:27.480440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 17:33:27.480452 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:27.480463 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:33:27.480474 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:33:27.480509 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:33:27.480617 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:33:27.480633 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:33:27.480669 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:33:27.480680 | orchestrator | 2025-05-28 17:33:27.480690 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-05-28 17:33:27.480701 | orchestrator | Wednesday 28 May 2025 17:30:02 +0000 (0:00:03.015) 0:00:54.956 ********* 2025-05-28 17:33:27.480712 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:27.480722 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:33:27.480733 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:27.480744 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:27.480754 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:33:27.480845 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:33:27.480858 | orchestrator | 2025-05-28 17:33:27.480869 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-05-28 17:33:27.480944 | orchestrator | Wednesday 28 May 2025 17:30:05 +0000 (0:00:02.821) 0:00:57.777 ********* 2025-05-28 17:33:27.480958 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:27.480969 | orchestrator | 2025-05-28 17:33:27.480980 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-05-28 17:33:27.480990 | orchestrator | Wednesday 28 May 2025 17:30:05 +0000 (0:00:00.137) 0:00:57.915 ********* 2025-05-28 17:33:27.481001 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:27.481011 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:27.481022 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:27.481032 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:33:27.481043 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:33:27.481054 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:33:27.481064 | orchestrator | 2025-05-28 17:33:27.481075 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-05-28 17:33:27.481085 | orchestrator | Wednesday 28 May 2025 17:30:06 +0000 (0:00:00.795) 0:00:58.711 ********* 2025-05-28 17:33:27.481096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2025-05-28 17:33:27.481108 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:27.481135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 17:33:27.481149 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:27.481160 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:33:27.481181 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:33:27.481202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 17:33:27.481214 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:27.481225 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:33:27.481236 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:33:27.481247 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:33:27.481258 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:33:27.481269 | orchestrator | 2025-05-28 17:33:27.481279 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-05-28 17:33:27.481290 | orchestrator | Wednesday 28 May 2025 17:30:09 +0000 (0:00:02.821) 0:01:01.532 ********* 2025-05-28 17:33:27.481309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 17:33:27.481328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 17:33:27.481348 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-28 17:33:27.481360 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-28 17:33:27.481372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 17:33:27.481389 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-28 17:33:27.481407 | orchestrator | 2025-05-28 17:33:27.481418 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-05-28 17:33:27.481429 | orchestrator | Wednesday 28 May 2025 17:30:13 +0000 (0:00:03.618) 0:01:05.151 ********* 2025-05-28 17:33:27.481440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 17:33:27.481459 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-28 17:33:27.481471 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-28 17:33:27.481485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 17:33:27.481510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 17:33:27.481524 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-28 17:33:27.481536 | orchestrator | 2025-05-28 17:33:27.481548 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-05-28 17:33:27.481561 | orchestrator | Wednesday 28 May 2025 17:30:19 +0000 (0:00:06.721) 0:01:11.873 ********* 2025-05-28 17:33:27.481581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 17:33:27.481595 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:33:27.481608 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:33:27.481625 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:33:27.481645 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:33:27.481658 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:33:27.481671 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:33:27.481683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 17:33:27.481705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
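
The neutron.conf and neutron_vpnaas.conf files staged above are not plain template copies: kolla-ansible layers the role template with operator override files through its merge_configs action plugin before writing the result into the per-service directory under /etc/kolla on each host. A minimal sketch of the task shape, with an approximate sources list (paths and conditions are illustrative assumptions, not the role's verbatim code):

    - name: Copying over neutron.conf
      become: true
      merge_configs:
        sources:
          - "{{ role_path }}/templates/neutron.conf.j2"
          - "{{ node_custom_config }}/global.conf"
          - "{{ node_custom_config }}/neutron.conf"
          - "{{ node_custom_config }}/neutron/{{ item.key }}.conf"
        dest: "{{ node_config_directory }}/{{ item.key }}/neutron.conf"
        mode: "0660"
      with_dict: "{{ neutron_services }}"
      when: item.value.enabled | bool

The changed/skipping split per item above falls out of such a when clause: the VPNaaS config is only rendered for the neutron-server items on the control nodes, so the metadata-agent items on the compute nodes skip.

2025-05-28 17:33:27.481717 | orchestrator | 2025-05-28 17:33:27.481729 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-05-28 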
17:33:27.481742 | orchestrator | Wednesday 28 May 2025 17:30:23 +0000 (0:00:03.647) 0:01:15.521 ********* 2025-05-28 17:33:27.481754 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:33:27.481766 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:33:27.481778 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:33:27.481790 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:33:27.481802 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:33:27.481814 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:33:27.481826 | orchestrator | 2025-05-28 17:33:27.481839 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-05-28 17:33:27.481850 | orchestrator | Wednesday 28 May 2025 17:30:26 +0000 (0:00:02.783) 0:01:18.305 ********* 2025-05-28 17:33:27.481868 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:33:27.481901 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:33:27.481917 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:33:27.481929 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:33:27.481940 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:33:27.481951 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:33:27.481970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 17:33:27.481982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 17:33:27.482009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 17:33:27.482053 | orchestrator | 2025-05-28 17:33:27.482065 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-05-28 17:33:27.482076 | orchestrator | Wednesday 28 May 2025 17:30:30 +0000 (0:00:04.143) 0:01:22.448 ********* 2025-05-28 17:33:27.482087 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:27.482097 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:27.482108 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:33:27.482119 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:27.482130 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:33:27.482141 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:33:27.482151 | orchestrator | 2025-05-28 17:33:27.482162 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-05-28 17:33:27.482173 | orchestrator | Wednesday 28 
May 2025 17:30:33 +0000 (0:00:03.373) 0:01:25.821 ********* 2025-05-28 17:33:27.482184 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:27.482194 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:27.482205 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:33:27.482215 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:27.482226 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:33:27.482237 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:33:27.482247 | orchestrator | 2025-05-28 17:33:27.482258 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-05-28 17:33:27.482269 | orchestrator | Wednesday 28 May 2025 17:30:36 +0000 (0:00:02.880) 0:01:28.702 ********* 2025-05-28 17:33:27.482280 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:27.482291 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:27.482301 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:33:27.482312 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:33:27.482322 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:27.482333 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:33:27.482344 | orchestrator | 2025-05-28 17:33:27.482354 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-05-28 17:33:27.482365 | orchestrator | Wednesday 28 May 2025 17:30:39 +0000 (0:00:02.449) 0:01:31.151 ********* 2025-05-28 17:33:27.482376 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:27.482386 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:33:27.482397 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:27.482408 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:27.482418 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:33:27.482429 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:33:27.482440 | orchestrator | 2025-05-28 17:33:27.482451 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-05-28 17:33:27.482462 | orchestrator | Wednesday 28 May 2025 17:30:41 +0000 (0:00:02.729) 0:01:33.880 ********* 2025-05-28 17:33:27.482535 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:27.482548 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:27.482558 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:27.482569 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:33:27.482580 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:33:27.482591 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:33:27.482601 | orchestrator | 2025-05-28 17:33:27.482619 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-05-28 17:33:27.482630 | orchestrator | Wednesday 28 May 2025 17:30:44 +0000 (0:00:02.600) 0:01:36.480 ********* 2025-05-28 17:33:27.482641 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:27.482652 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:27.482662 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:27.482673 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:33:27.482683 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:33:27.482694 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:33:27.482705 | orchestrator |
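
A pattern worth noting in this stretch: linuxbridge_agent.ini, openvswitch_agent.ini, sriov_agent.ini, mlnx_agent.ini, eswitchd.conf and dhcp_agent.ini are skipped on all six nodes, and the only Neutron services receiving configuration are neutron-server on the control nodes and neutron-ovn-metadata-agent on the compute nodes. That is the signature of an OVN-backed Neutron deployment, where OVN replaces the classic per-node L2/L3/DHCP agents. In kolla-ansible this hangs off the plugin selection in globals.yml; a plausible excerpt, inferred from the log rather than read from the testbed configuration:

    # globals.yml (kolla-ansible), inferred excerpt
    neutron_plugin_agent: "ovn"    # OVN backend: the per-agent INI tasks all skip
    enable_neutron_vpnaas: "yes"   # consistent with neutron_vpnaas.conf being rendered earlier

2025-05-28 17:33:27.482715 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-05-28 17:33:27.482726 | orchestrator | 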
Wednesday 28 May 2025 17:30:46 +0000 (0:00:02.207) 0:01:38.688 ********* 2025-05-28 17:33:27.482737 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-28 17:33:27.482748 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:27.482758 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-28 17:33:27.482769 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:27.482780 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-28 17:33:27.482790 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:33:27.482801 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-28 17:33:27.482812 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:27.482822 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-28 17:33:27.482833 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:33:27.482844 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-28 17:33:27.482855 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:33:27.482865 | orchestrator | 2025-05-28 17:33:27.483067 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-05-28 17:33:27.483089 | orchestrator | Wednesday 28 May 2025 17:30:48 +0000 (0:00:01.721) 0:01:40.409 ********* 2025-05-28 17:33:27.483108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 17:33:27.483121 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:27.483132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 17:33:27.483153 | orchestrator | skipping: 
[testbed-node-2] 2025-05-28 17:33:27.483164 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:33:27.483183 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:33:27.483195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 17:33:27.483207 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:27.483218 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:33:27.483230 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:33:27.483245 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:33:27.483257 | orchestrator | skipping: 
[testbed-node-5] 2025-05-28 17:33:27.483276 | orchestrator | 2025-05-28 17:33:27.483287 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-05-28 17:33:27.483298 | orchestrator | Wednesday 28 May 2025 17:30:51 +0000 (0:00:03.197) 0:01:43.606 ********* 2025-05-28 17:33:27.483309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 17:33:27.483328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 17:33:27.483340 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:27.483351 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:27.483362 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:33:27.483373 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:33:27.483389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 17:33:27.483401 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:27.483412 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:33:27.483430 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:33:27.483441 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:33:27.483452 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:33:27.483462 | orchestrator | 2025-05-28 17:33:27.483472 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-05-28 17:33:27.483481 | orchestrator | Wednesday 28 May 2025 17:30:53 +0000 (0:00:02.190) 0:01:45.797 ********* 2025-05-28 17:33:27.483491 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:27.483500 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:27.483510 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:33:27.483519 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:27.483528 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:33:27.483542 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:33:27.483552 | orchestrator | 2025-05-28 17:33:27.483562 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-05-28 17:33:27.483572 | orchestrator | Wednesday 28 May 2025 17:30:56 +0000 (0:00:02.442) 0:01:48.240 ********* 2025-05-28 17:33:27.483582 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:27.483591 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:27.483601 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:27.483610 | 
orchestrator | changed: [testbed-node-3] 2025-05-28 17:33:27.483620 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:33:27.483630 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:33:27.483640 | orchestrator | 2025-05-28 17:33:27.483649 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-05-28 17:33:27.483659 | orchestrator | Wednesday 28 May 2025 17:30:59 +0000 (0:00:03.676) 0:01:51.916 ********* 2025-05-28 17:33:27.483668 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:27.483678 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:27.483688 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:27.483697 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:33:27.483707 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:33:27.483716 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:33:27.483726 | orchestrator | 2025-05-28 17:33:27.483736 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-05-28 17:33:27.483745 | orchestrator | Wednesday 28 May 2025 17:31:02 +0000 (0:00:02.566) 0:01:54.483 ********* 2025-05-28 17:33:27.483755 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:27.483764 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:27.483774 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:27.483784 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:33:27.483799 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:33:27.483808 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:33:27.483818 | orchestrator | 2025-05-28 17:33:27.483827 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-05-28 17:33:27.483837 | orchestrator | Wednesday 28 May 2025 17:31:04 +0000 (0:00:02.228) 0:01:56.711 ********* 2025-05-28 17:33:27.483846 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:27.483856 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:27.483865 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:27.483874 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:33:27.483902 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:33:27.483912 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:33:27.483921 | orchestrator | 2025-05-28 17:33:27.483931 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-05-28 17:33:27.483941 | orchestrator | Wednesday 28 May 2025 17:31:08 +0000 (0:00:03.794) 0:02:00.506 ********* 2025-05-28 17:33:27.483950 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:27.483959 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:27.483969 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:27.483978 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:33:27.483988 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:33:27.483997 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:33:27.484007 | orchestrator |
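
The neutron_ovn_metadata_agent.ini changes above land only on testbed-node-3/4/5, matching where that container runs. Also visible in every service item is a healthcheck dict; kolla turns it into a Docker healthcheck on the deployed container, with healthcheck_curl and healthcheck_port being helper scripts shipped in the kolla images and the numeric fields read as seconds. For illustration only, the metadata agent's settings in docker-compose notation (an assumption for readability; the deployment configures this through kolla's container module, not compose):

    services:
      neutron_ovn_metadata_agent:
        image: registry.osism.tech/kolla/neutron-metadata-agent:2024.2
        healthcheck:
          # checks that the agent process has a socket on 6640, the local ovsdb-server port
          test: ["CMD-SHELL", "healthcheck_port neutron-ovn-metadata-agent 6640"]
          interval: 30s
          timeout: 30s
          retries: 3
          start_period: 5s

2025-05-28 17:33:27.484021 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-05-28 17:33:27.484031 | orchestrator | Wednesday 28 May 2025 17:31:10 +0000 (0:00:02.517) 0:02:03.023 ********* 2025-05-28 17:33:27.484040 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:27.484050 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:27.484059 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:27.484069 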
| orchestrator | skipping: [testbed-node-5] 2025-05-28 17:33:27.484078 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:33:27.484088 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:33:27.484097 | orchestrator | 2025-05-28 17:33:27.484107 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-05-28 17:33:27.484116 | orchestrator | Wednesday 28 May 2025 17:31:13 +0000 (0:00:02.656) 0:02:05.680 ********* 2025-05-28 17:33:27.484126 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:27.484135 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:27.484145 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:27.484154 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:33:27.484164 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:33:27.484173 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:33:27.484183 | orchestrator | 2025-05-28 17:33:27.484192 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-05-28 17:33:27.484202 | orchestrator | Wednesday 28 May 2025 17:31:17 +0000 (0:00:03.563) 0:02:09.244 ********* 2025-05-28 17:33:27.484211 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:27.484221 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:27.484230 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:33:27.484240 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:33:27.484249 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:27.484259 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:33:27.484268 | orchestrator | 2025-05-28 17:33:27.484278 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-05-28 17:33:27.484287 | orchestrator | Wednesday 28 May 2025 17:31:20 +0000 (0:00:03.166) 0:02:12.410 ********* 2025-05-28 17:33:27.484297 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:27.484306 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:33:27.484316 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:27.484325 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:27.484334 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:33:27.484344 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:33:27.484359 | orchestrator | 2025-05-28 17:33:27.484369 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-05-28 17:33:27.484378 | orchestrator | Wednesday 28 May 2025 17:31:23 +0000 (0:00:03.085) 0:02:15.496 ********* 2025-05-28 17:33:27.484388 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-28 17:33:27.484398 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:27.484407 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-28 17:33:27.484417 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:27.484432 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-28 17:33:27.484442 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:27.484452 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-28 17:33:27.484462 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:33:27.484471 | orchestrator | skipping: [testbed-node-5] => 
(item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-28 17:33:27.484481 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:33:27.484491 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-28 17:33:27.484500 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:33:27.484510 | orchestrator | 2025-05-28 17:33:27.484520 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-05-28 17:33:27.484529 | orchestrator | Wednesday 28 May 2025 17:31:27 +0000 (0:00:03.684) 0:02:19.180 ********* 2025-05-28 17:33:27.484539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 17:33:27.484550 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:27.484564 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:33:27.484574 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:33:27.484584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 17:33:27.484600 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:27.484615 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 17:33:27.484625 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:27.484635 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:33:27.484645 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:33:27.484655 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 17:33:27.484665 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:33:27.484674 | orchestrator | 2025-05-28 17:33:27.484684 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-05-28 17:33:27.484694 | orchestrator | Wednesday 28 May 2025 17:31:30 +0000 (0:00:02.915) 0:02:22.095 ********* 2025-05-28 17:33:27.484708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 17:33:27.484724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 17:33:27.484743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 17:33:27.484753 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-28 17:33:27.484764 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-28 17:33:27.484781 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-28 17:33:27.484798 | orchestrator | 2025-05-28 17:33:27.484808 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-28 17:33:27.484818 | orchestrator | Wednesday 28 May 2025 17:31:32 +0000 (0:00:02.760) 0:02:24.856 ********* 2025-05-28 17:33:27.484827 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:27.484837 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:27.484847 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:27.484856 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:33:27.484866 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:33:27.484875 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:33:27.484899 | orchestrator | 2025-05-28 17:33:27.484909 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-05-28 17:33:27.484918 | orchestrator | Wednesday 28 May 2025 17:31:33 +0000 (0:00:00.594) 0:02:25.450 ********* 2025-05-28 17:33:27.484928 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:33:27.484937 | orchestrator | 2025-05-28 17:33:27.484947 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-05-28 17:33:27.484956 | orchestrator | Wednesday 28 May 2025 17:31:35 +0000 (0:00:02.100) 0:02:27.551 ********* 2025-05-28 17:33:27.484966 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:33:27.484976 | orchestrator | 2025-05-28 17:33:27.484985 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-05-28 17:33:27.484995 | orchestrator | Wednesday 28 May 2025 17:31:37 +0000 (0:00:02.155) 0:02:29.707 ********* 2025-05-28 17:33:27.485004 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:33:27.485014 | orchestrator | 2025-05-28 17:33:27.485024 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-28 17:33:27.485033 | orchestrator | Wednesday 28 May 2025 17:32:25 +0000 (0:00:47.800) 0:03:17.508 ********* 2025-05-28 17:33:27.485043 | orchestrator | 2025-05-28 17:33:27.485052 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-28 17:33:27.485062 | orchestrator | Wednesday 28 May 2025 17:32:25 +0000 (0:00:00.152) 0:03:17.660 ********* 2025-05-28 17:33:27.485072 | orchestrator | 2025-05-28 17:33:27.485081 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-28 17:33:27.485096 | orchestrator | Wednesday 28 May 2025 17:32:25 +0000 (0:00:00.373) 0:03:18.034 ********* 
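Every service definition echoed in the items above carries the same kolla-style healthcheck block: interval, retries, start_period and timeout stored as strings of seconds, plus a two-element test list in the CMD-SHELL form. As a reading aid, here is a minimal Python sketch of how such a block could map onto Docker's health-check options; it is illustrative only and is not the code kolla-ansible itself runs.

    def healthcheck_to_docker_flags(hc):
        """Translate a kolla-style healthcheck dict into `docker run` flags.

        Assumes the 'test' list uses the CMD-SHELL form seen in the log and
        that the string values are whole seconds.
        """
        kind, command = hc["test"]  # e.g. ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696']
        assert kind == "CMD-SHELL"
        return [
            "--health-cmd", command,
            "--health-interval", hc["interval"] + "s",
            "--health-retries", hc["retries"],
            "--health-start-period", hc["start_period"] + "s",
            "--health-timeout", hc["timeout"] + "s",
        ]

    # Example block as logged for neutron-server on testbed-node-0:
    example = {
        "interval": "30", "retries": "3", "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9696"],
        "timeout": "30",
    }
    print(" ".join(healthcheck_to_docker_flags(example)))

Run on the example block, this prints the five --health-* options a container runtime expects; the durations gain an "s" suffix while retries stays unitless.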
2025-05-28 17:33:27.485106 | orchestrator |
2025-05-28 17:33:27.485116 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-28 17:33:27.485125 | orchestrator | Wednesday 28 May 2025 17:32:26 +0000 (0:00:00.068) 0:03:18.102 *********
2025-05-28 17:33:27.485135 | orchestrator |
2025-05-28 17:33:27.485145 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-28 17:33:27.485154 | orchestrator | Wednesday 28 May 2025 17:32:26 +0000 (0:00:00.084) 0:03:18.187 *********
2025-05-28 17:33:27.485164 | orchestrator |
2025-05-28 17:33:27.485173 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-28 17:33:27.485183 | orchestrator | Wednesday 28 May 2025 17:32:26 +0000 (0:00:00.114) 0:03:18.302 *********
2025-05-28 17:33:27.485192 | orchestrator |
2025-05-28 17:33:27.485202 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2025-05-28 17:33:27.485212 | orchestrator | Wednesday 28 May 2025 17:32:26 +0000 (0:00:00.068) 0:03:18.370 *********
2025-05-28 17:33:27.485221 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:33:27.485231 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:33:27.485241 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:33:27.485250 | orchestrator |
2025-05-28 17:33:27.485260 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-05-28 17:33:27.485270 | orchestrator | Wednesday 28 May 2025 17:32:52 +0000 (0:00:26.590) 0:03:44.960 *********
2025-05-28 17:33:27.485279 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:33:27.485295 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:33:27.485305 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:33:27.485314 | orchestrator |
2025-05-28 17:33:27.485324 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 17:33:27.485334 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-28 17:33:27.485345 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-05-28 17:33:27.485354 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-05-28 17:33:27.485364 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-05-28 17:33:27.485374 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-05-28 17:33:27.485388 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-05-28 17:33:27.485398 | orchestrator |
2025-05-28 17:33:27.485408 | orchestrator |
2025-05-28 17:33:27.485417 | orchestrator | TASKS RECAP ********************************************************************
2025-05-28 17:33:27.485427 | orchestrator | Wednesday 28 May 2025 17:33:24 +0000 (0:00:31.243) 0:04:16.204 *********
2025-05-28 17:33:27.485437 | orchestrator | ===============================================================================
2025-05-28 17:33:27.485446 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 47.80s
2025-05-28 17:33:27.485456 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 31.24s
2025-05-28 17:33:27.485465 | orchestrator | neutron : Restart neutron-server container ----------------------------- 26.59s
2025-05-28 17:33:27.485475 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.52s
2025-05-28 17:33:27.485484 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.72s
2025-05-28 17:33:27.485493 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.12s
2025-05-28 17:33:27.485503 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.14s
2025-05-28 17:33:27.485512 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.93s
2025-05-28 17:33:27.485522 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 3.79s
2025-05-28 17:33:27.485532 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 3.68s
2025-05-28 17:33:27.485541 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.68s
2025-05-28 17:33:27.485551 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.65s
2025-05-28 17:33:27.485560 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.62s
2025-05-28 17:33:27.485570 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 3.56s
2025-05-28 17:33:27.485579 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.50s
2025-05-28 17:33:27.485589 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.49s
2025-05-28 17:33:27.485598 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.47s
2025-05-28 17:33:27.485608 | orchestrator | neutron : Copying over linuxbridge_agent.ini ---------------------------- 3.37s
2025-05-28 17:33:27.485617 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.35s
2025-05-28 17:33:27.485627 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.20s
2025-05-28 17:33:27.485641 | orchestrator | 2025-05-28 17:33:27 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:33:27.485657 | orchestrator | 2025-05-28 17:33:27 | INFO  | Task a0f9f78f-e6a3-424a-b187-135b859fe70c is in state STARTED
2025-05-28 17:33:27.485667 | orchestrator | 2025-05-28 17:33:27 | INFO  | Task 0300d580-3d9b-4dca-a827-1744c7b46ba9 is in state STARTED
2025-05-28 17:33:27.485677 | orchestrator | 2025-05-28 17:33:27 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:33:30.522376 | orchestrator | 2025-05-28 17:33:30 | INFO  | Task ef03f1a9-5db9-4885-b01e-f3b1509baaf4 is in state STARTED
2025-05-28 17:33:30.524386 | orchestrator | 2025-05-28 17:33:30 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:33:30.526570 | orchestrator | 2025-05-28 17:33:30 | INFO  | Task a0f9f78f-e6a3-424a-b187-135b859fe70c is in state STARTED
2025-05-28 17:33:30.531388 | orchestrator | 2025-05-28 17:33:30 | INFO  | Task 0300d580-3d9b-4dca-a827-1744c7b46ba9 is in state STARTED
2025-05-28 17:33:30.531458 | orchestrator | 2025-05-28 17:33:30 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:33:33.571566 | orchestrator | 2025-05-28 17:33:33 | INFO  | Task ef03f1a9-5db9-4885-b01e-f3b1509baaf4 is in state STARTED
2025-05-28 17:33:33.571686 | orchestrator | 2025-05-28 17:33:33 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:33:33.572526 | orchestrator | 2025-05-28 17:33:33 | INFO  | Task a0f9f78f-e6a3-424a-b187-135b859fe70c is in state STARTED
2025-05-28 17:33:33.573170 | orchestrator | 2025-05-28 17:33:33 | INFO  | Task 0300d580-3d9b-4dca-a827-1744c7b46ba9 is in state STARTED
2025-05-28 17:33:33.573207 | orchestrator | 2025-05-28 17:33:33 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:33:36.615121 | orchestrator | 2025-05-28 17:33:36 | INFO  | Task ef03f1a9-5db9-4885-b01e-f3b1509baaf4 is in state STARTED
2025-05-28 17:33:36.616608 | orchestrator | 2025-05-28 17:33:36 | INFO  | Task e49c5002-148d-4c40-bd75-d8b100838107 is in state STARTED
2025-05-28 17:33:36.618650 | orchestrator | 2025-05-28 17:33:36 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:33:36.620641 | orchestrator | 2025-05-28 17:33:36 | INFO  | Task a0f9f78f-e6a3-424a-b187-135b859fe70c is in state STARTED
2025-05-28 17:33:36.625524 | orchestrator |
2025-05-28 17:33:36.625591 | orchestrator |
2025-05-28 17:33:36.625640 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-28 17:33:36.625663 | orchestrator |
2025-05-28 17:33:36.625681 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-28 17:33:36.625700 | orchestrator | Wednesday 28 May 2025 17:32:21 +0000 (0:00:00.272) 0:00:00.272 *********
2025-05-28 17:33:36.625720 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:33:36.625740 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:33:36.625758 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:33:36.625777 | orchestrator |
2025-05-28 17:33:36.625794 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-28 17:33:36.625814 | orchestrator | Wednesday 28 May 2025 17:32:21 +0000 (0:00:00.360) 0:00:00.632 *********
2025-05-28 17:33:36.625834 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2025-05-28 17:33:36.625853 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2025-05-28 17:33:36.625903 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2025-05-28 17:33:36.625917 | orchestrator |
2025-05-28 17:33:36.625928 | orchestrator | PLAY [Apply role placement] ****************************************************
2025-05-28 17:33:36.625939 | orchestrator |
2025-05-28 17:33:36.625949 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-05-28 17:33:36.625960 | orchestrator | Wednesday 28 May 2025 17:32:22 +0000 (0:00:01.021) 0:00:01.654 *********
2025-05-28 17:33:36.625971 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 17:33:36.626011 | orchestrator |
2025-05-28 17:33:36.626105 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2025-05-28 17:33:36.626119 | orchestrator | Wednesday 28 May 2025 17:32:23 +0000 (0:00:00.682) 0:00:02.337 *********
2025-05-28 17:33:36.626133 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2025-05-28 17:33:36.626145 | orchestrator |
2025-05-28 17:33:36.626157 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2025-05-28 17:33:36.626169 | orchestrator | Wednesday 28 May 2025 17:32:26 +0000
(0:00:03.527) 0:00:05.864 ********* 2025-05-28 17:33:36.626181 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-05-28 17:33:36.626194 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-05-28 17:33:36.626206 | orchestrator | 2025-05-28 17:33:36.626218 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-05-28 17:33:36.626230 | orchestrator | Wednesday 28 May 2025 17:32:33 +0000 (0:00:06.455) 0:00:12.320 ********* 2025-05-28 17:33:36.626243 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-28 17:33:36.626255 | orchestrator | 2025-05-28 17:33:36.626267 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-05-28 17:33:36.626280 | orchestrator | Wednesday 28 May 2025 17:32:36 +0000 (0:00:03.377) 0:00:15.698 ********* 2025-05-28 17:33:36.626292 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-28 17:33:36.626304 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-05-28 17:33:36.626316 | orchestrator | 2025-05-28 17:33:36.626329 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-05-28 17:33:36.626341 | orchestrator | Wednesday 28 May 2025 17:32:40 +0000 (0:00:04.098) 0:00:19.796 ********* 2025-05-28 17:33:36.626353 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-28 17:33:36.626365 | orchestrator | 2025-05-28 17:33:36.626378 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-05-28 17:33:36.626391 | orchestrator | Wednesday 28 May 2025 17:32:44 +0000 (0:00:03.943) 0:00:23.740 ********* 2025-05-28 17:33:36.626402 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-05-28 17:33:36.626414 | orchestrator | 2025-05-28 17:33:36.626426 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-28 17:33:36.626437 | orchestrator | Wednesday 28 May 2025 17:32:48 +0000 (0:00:03.877) 0:00:27.617 ********* 2025-05-28 17:33:36.626448 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:36.626459 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:36.626469 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:36.626480 | orchestrator | 2025-05-28 17:33:36.626490 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-05-28 17:33:36.626501 | orchestrator | Wednesday 28 May 2025 17:32:48 +0000 (0:00:00.318) 0:00:27.936 ********* 2025-05-28 17:33:36.626516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-28 17:33:36.626557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-28 17:33:36.626579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-28 17:33:36.626591 | orchestrator | 2025-05-28 17:33:36.626601 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-05-28 17:33:36.626613 | orchestrator | Wednesday 28 May 2025 17:32:49 +0000 (0:00:00.832) 0:00:28.768 ********* 2025-05-28 17:33:36.626623 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:36.626634 | orchestrator | 2025-05-28 17:33:36.626645 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-05-28 17:33:36.626656 | orchestrator | Wednesday 28 May 2025 17:32:49 +0000 (0:00:00.136) 0:00:28.905 ********* 2025-05-28 17:33:36.626667 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:36.626677 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:36.626688 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:36.626699 | orchestrator | 2025-05-28 17:33:36.626709 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-28 17:33:36.626720 | orchestrator | Wednesday 28 May 2025 17:32:50 +0000 (0:00:00.472) 0:00:29.377 ********* 2025-05-28 17:33:36.626739 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:33:36.626759 | orchestrator | 2025-05-28 17:33:36.626777 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-05-28 17:33:36.626795 | orchestrator | Wednesday 28 May 2025 17:32:50 +0000 (0:00:00.503) 0:00:29.880 ********* 
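Each placement-api item above repeats the same 'haproxy' sub-dict: an internal placement_api frontend and a placement_api_external one published under external_fqdn api.testbed.osism.xyz, both balancing to backend port 8780 with tls_backend 'no' (plain HTTP between proxy and service). The sketch below is one plausible way to read those keys as load-balancer configuration; the bind address INTERNAL_VIP is an assumption made up for illustration (the log only reveals the backend IPs 192.168.16.10-12), and this does not reproduce kolla-ansible's real haproxy templates.

    # Hypothetical rendering of the 'haproxy' sub-dict shown in the items above.
    INTERNAL_VIP = "192.168.16.254"  # assumed; the real VIP is not visible in this log

    def render_listen_block(name, svc, backends):
        """Turn one haproxy service entry into a listen block."""
        bind_host = svc["external_fqdn"] if svc["external"] else INTERNAL_VIP
        lines = [
            f"listen {name}",
            f"    mode {svc['mode']}",
            f"    bind {bind_host}:{svc['listen_port']}",
        ]
        for i, host in enumerate(backends):
            # tls_backend == 'no' -> plain HTTP from proxy to the service port
            lines.append(f"    server testbed-node-{i} {host}:{svc['port']} check")
        return "\n".join(lines)

    haproxy = {
        "placement_api": {"enabled": True, "mode": "http", "external": False,
                          "port": "8780", "listen_port": "8780", "tls_backend": "no"},
        "placement_api_external": {"enabled": True, "mode": "http", "external": True,
                                   "external_fqdn": "api.testbed.osism.xyz",
                                   "port": "8780", "listen_port": "8780", "tls_backend": "no"},
    }
    for name, svc in haproxy.items():
        if svc["enabled"]:
            print(render_listen_block(name, svc, ["192.168.16.10", "192.168.16.11", "192.168.16.12"]))

This is why the healthchecks in the items probe http://192.168.16.1x:8780 directly on each node: the proxy layer, not the service, terminates the public endpoint.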
2025-05-28 17:33:36.626814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-28 17:33:36.626892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-28 17:33:36.626909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-28 17:33:36.626920 | orchestrator | 2025-05-28 17:33:36.626931 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-05-28 17:33:36.626942 | orchestrator | Wednesday 28 May 2025 17:32:52 +0000 (0:00:01.459) 0:00:31.340 ********* 2025-05-28 17:33:36.626953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-28 17:33:36.626965 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:36.626976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-28 17:33:36.626994 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:36.627012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-28 17:33:36.627024 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:36.627035 | orchestrator | 2025-05-28 17:33:36.627046 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-05-28 17:33:36.627057 | orchestrator | Wednesday 28 May 2025 17:32:53 +0000 (0:00:01.450) 0:00:32.790 ********* 2025-05-28 17:33:36.627108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-28 17:33:36.627121 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:36.627132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-28 17:33:36.627143 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:36.627154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-28 17:33:36.627173 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:36.627184 | orchestrator | 2025-05-28 17:33:36.627194 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-05-28 17:33:36.627205 | orchestrator | Wednesday 28 May 2025 17:32:54 +0000 (0:00:01.203) 0:00:33.994 ********* 2025-05-28 17:33:36.627222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-28 
17:33:36.627238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-28 17:33:36.627250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-28 17:33:36.627262 | orchestrator | 2025-05-28 17:33:36.627273 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-05-28 17:33:36.627284 | orchestrator | Wednesday 28 May 2025 17:32:56 +0000 (0:00:01.551) 0:00:35.546 ********* 2025-05-28 17:33:36.627295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-28 17:33:36.627320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-28 17:33:36.627345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-28 17:33:36.627356 | orchestrator | 2025-05-28 17:33:36.627367 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-05-28 17:33:36.627379 | orchestrator | Wednesday 28 May 2025 17:32:59 +0000 (0:00:03.003) 0:00:38.549 ********* 2025-05-28 17:33:36.627390 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-28 17:33:36.627401 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-28 17:33:36.627412 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-28 17:33:36.627422 | orchestrator | 2025-05-28 17:33:36.627433 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-05-28 17:33:36.627444 | orchestrator | Wednesday 28 May 2025 17:33:01 +0000 (0:00:01.959) 0:00:40.509 ********* 2025-05-28 17:33:36.627455 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:33:36.627466 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:33:36.627477 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:33:36.627487 | orchestrator | 2025-05-28 17:33:36.627498 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-05-28 17:33:36.627509 | orchestrator | Wednesday 28 May 2025 17:33:03 +0000 (0:00:01.772) 0:00:42.282 ********* 2025-05-28 17:33:36.627520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-28 17:33:36.627591 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:33:36.627605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-28 17:33:36.627616 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:33:36.627640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-28 17:33:36.627652 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:33:36.627663 | orchestrator | 2025-05-28 17:33:36.627674 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-05-28 17:33:36.627685 | orchestrator | Wednesday 28 May 2025 17:33:03 +0000 (0:00:00.551) 0:00:42.833 ********* 2025-05-28 17:33:36.627696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}}}}) 2025-05-28 17:33:36.627708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-28 17:33:36.627727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-28 17:33:36.627738 | orchestrator | 2025-05-28 17:33:36.627749 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-05-28 17:33:36.627760 | orchestrator | Wednesday 28 May 2025 17:33:05 +0000 (0:00:01.688) 0:00:44.521 ********* 2025-05-28 17:33:36.627771 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:33:36.627782 | orchestrator | 2025-05-28 17:33:36.627793 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-05-28 17:33:36.627804 | orchestrator | Wednesday 28 May 2025 17:33:07 +0000 (0:00:02.166) 0:00:46.688 ********* 2025-05-28 17:33:36.627814 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:33:36.627825 | orchestrator | 2025-05-28 17:33:36.627836 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-05-28 17:33:36.627847 | orchestrator | Wednesday 28 May 2025 17:33:09 +0000 (0:00:01.850) 0:00:48.539 ********* 2025-05-28 17:33:36.627858 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:33:36.627896 | orchestrator | 2025-05-28 17:33:36.627908 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-28 17:33:36.627919 | orchestrator | Wednesday 28 May 2025 17:33:22 +0000 (0:00:13.183) 0:01:01.722 ********* 2025-05-28 17:33:36.627930 | orchestrator | 2025-05-28 17:33:36.627940 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-28 17:33:36.627951 | orchestrator | Wednesday 28 May 2025 17:33:22 +0000 (0:00:00.062) 0:01:01.785 ********* 2025-05-28 17:33:36.627962 | 
2025-05-28 17:33:36.627962 | orchestrator |
2025-05-28 17:33:36.627979 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-05-28 17:33:36.627996 | orchestrator | Wednesday 28 May 2025 17:33:22 +0000 (0:00:00.058) 0:01:01.843 *********
2025-05-28 17:33:36.628007 | orchestrator |
2025-05-28 17:33:36.628018 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2025-05-28 17:33:36.628029 | orchestrator | Wednesday 28 May 2025 17:33:22 +0000 (0:00:00.063) 0:01:01.907 *********
2025-05-28 17:33:36.628039 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:33:36.628050 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:33:36.628061 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:33:36.628072 | orchestrator |
2025-05-28 17:33:36.628083 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 17:33:36.628095 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-28 17:33:36.628108 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-28 17:33:36.628119 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-28 17:33:36.628137 | orchestrator |
2025-05-28 17:33:36.628148 | orchestrator |
2025-05-28 17:33:36.628159 | orchestrator | TASKS RECAP ********************************************************************
2025-05-28 17:33:36.628169 | orchestrator | Wednesday 28 May 2025 17:33:33 +0000 (0:00:10.543) 0:01:12.450 *********
2025-05-28 17:33:36.628180 | orchestrator | ===============================================================================
2025-05-28 17:33:36.628191 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.18s
2025-05-28 17:33:36.628202 | orchestrator | placement : Restart placement-api container ---------------------------- 10.54s
2025-05-28 17:33:36.628291 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.46s
2025-05-28 17:33:36.628305 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.10s
2025-05-28 17:33:36.628316 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.94s
2025-05-28 17:33:36.628326 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.88s
2025-05-28 17:33:36.628337 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.53s
2025-05-28 17:33:36.628348 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.38s
2025-05-28 17:33:36.628358 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.00s
2025-05-28 17:33:36.628369 | orchestrator | placement : Creating placement databases -------------------------------- 2.17s
2025-05-28 17:33:36.628380 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.96s
2025-05-28 17:33:36.628391 | orchestrator | placement : Creating placement databases user and setting permissions --- 1.85s
2025-05-28 17:33:36.628401 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.77s
2025-05-28 17:33:36.628412 | orchestrator | placement : Check placement containers ---------------------------------- 1.69s
2025-05-28 17:33:36.628422 | orchestrator | placement : Copying over config.json files for services ----------------- 1.55s
2025-05-28 17:33:36.628433 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.46s
2025-05-28 17:33:36.628444 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.45s
2025-05-28 17:33:36.628455 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.20s
2025-05-28 17:33:36.628466 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.02s
2025-05-28 17:33:36.628477 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.83s
2025-05-28 17:33:36.628487 | orchestrator | 2025-05-28 17:33:36 | INFO  | Task 0300d580-3d9b-4dca-a827-1744c7b46ba9 is in state SUCCESS
2025-05-28 17:33:36.628499 | orchestrator | 2025-05-28 17:33:36 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:33:39.672994 | orchestrator | 2025-05-28 17:33:39 | INFO  | Task ef03f1a9-5db9-4885-b01e-f3b1509baaf4 is in state STARTED
2025-05-28 17:33:39.673974 | orchestrator | 2025-05-28 17:33:39 | INFO  | Task e49c5002-148d-4c40-bd75-d8b100838107 is in state STARTED
2025-05-28 17:33:39.675633 | orchestrator | 2025-05-28 17:33:39 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:33:39.677089 | orchestrator | 2025-05-28 17:33:39 | INFO  | Task a0f9f78f-e6a3-424a-b187-135b859fe70c is in state STARTED
2025-05-28 17:33:39.677195 | orchestrator | 2025-05-28 17:33:39 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:33:42.718836 | orchestrator | 2025-05-28 17:33:42 | INFO  | Task ef03f1a9-5db9-4885-b01e-f3b1509baaf4 is in state STARTED
2025-05-28 17:33:42.719611 | orchestrator | 2025-05-28 17:33:42 | INFO  | Task e49c5002-148d-4c40-bd75-d8b100838107 is in state SUCCESS
2025-05-28 17:33:42.724224 | orchestrator | 2025-05-28 17:33:42 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:33:42.727283 | orchestrator | 2025-05-28 17:33:42 | INFO  | Task a0f9f78f-e6a3-424a-b187-135b859fe70c is in state STARTED
2025-05-28 17:33:42.728494 | orchestrator | 2025-05-28 17:33:42 | INFO  | Task 8ab5193b-c2ba-4252-ac9f-ee2dda347044 is in state STARTED
2025-05-28 17:33:42.728537 | orchestrator | 2025-05-28 17:33:42 | INFO  | Wait 1 second(s) until the next check
[... repeated identical status checks omitted: from 17:33:45 to 17:34:55 the tasks ef03f1a9-5db9-4885-b01e-f3b1509baaf4, c1daa203-f755-4254-b626-9a23cffbb894, a0f9f78f-e6a3-424a-b187-135b859fe70c and 8ab5193b-c2ba-4252-ac9f-ee2dda347044 remained in state STARTED, polled every ~3 seconds ...]
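(The wait loop condensed above is the OSISM manager polling the state of its Celery-backed deployment tasks until each reports SUCCESS. Expressed as an Ansible retry loop it would look roughly like the sketch below; the status URL and JSON shape are hypothetical, since OSISM queries its manager backend directly rather than a public HTTP endpoint:)

    - name: Wait until a deployment task reaches SUCCESS (hypothetical sketch)
      ansible.builtin.uri:
        url: "https://manager.example/api/tasks/{{ task_id }}"   # hypothetical endpoint
        return_content: true
      register: task_status
      until: (task_status.content | from_json).state == 'SUCCESS'
      retries: 600
      delay: 1   # mirrors the "Wait 1 second(s) until the next check" cadence above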
2025-05-28 17:34:58.994272 | orchestrator | 2025-05-28 17:34:58 | INFO  | Task ef03f1a9-5db9-4885-b01e-f3b1509baaf4 is in state STARTED
2025-05-28 17:34:58.995663 | orchestrator | 2025-05-28 17:34:58 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:34:58.998233 | orchestrator | 2025-05-28 17:34:58 | INFO  | Task a0f9f78f-e6a3-424a-b187-135b859fe70c is in state SUCCESS
2025-05-28 17:34:59.000292 | orchestrator |
2025-05-28 17:34:59.000335 | orchestrator |
2025-05-28 17:34:59.000349 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-28 17:34:59.000361 | orchestrator |
2025-05-28 17:34:59.000372 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-28 17:34:59.000384 | orchestrator | Wednesday 28 May 2025 17:33:39 +0000 (0:00:00.191) 0:00:00.191 *********
2025-05-28 17:34:59.000395 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:34:59.000423 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:34:59.000435 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:34:59.000446 | orchestrator |
2025-05-28 17:34:59.000457 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-28 17:34:59.000728 | orchestrator | Wednesday 28 May 2025 17:33:39 +0000 (0:00:00.294) 0:00:00.486 *********
2025-05-28 17:34:59.000744 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2025-05-28 17:34:59.000756 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2025-05-28 17:34:59.000767 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2025-05-28 17:34:59.000778 | orchestrator |
2025-05-28 17:34:59.000788 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2025-05-28 17:34:59.000819 | orchestrator |
2025-05-28 17:34:59.000831 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2025-05-28 17:34:59.000841 | orchestrator | Wednesday 28 May 2025 17:33:40 +0000 (0:00:00.570) 0:00:01.056 *********
2025-05-28 17:34:59.000852 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:34:59.000863 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:34:59.000873 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:34:59.000884 | orchestrator |
2025-05-28 17:34:59.000894 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 17:34:59.000906 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 17:34:59.000920 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 17:34:59.000931 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
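(The "Waiting for Nova public port to be UP" task above is a simple TCP reachability check against the load balancer. A minimal sketch with ansible.builtin.wait_for, assuming the public FQDN used throughout this deployment and the default nova-api port 8774 — the actual kolla-ansible task derives host and port from its own variables:)

    - name: Waiting for Nova public port to be UP (minimal sketch)
      ansible.builtin.wait_for:
        host: api.testbed.osism.xyz   # public endpoint from this deployment
        port: 8774                    # assumption: default nova-api port; not shown in this log
        timeout: 300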
2025-05-28 17:34:59.000971 | orchestrator |
2025-05-28 17:34:59.000982 | orchestrator |
2025-05-28 17:34:59.000993 | orchestrator | TASKS RECAP ********************************************************************
2025-05-28 17:34:59.001004 | orchestrator | Wednesday 28 May 2025 17:33:40 +0000 (0:00:00.647) 0:00:01.704 *********
2025-05-28 17:34:59.001015 | orchestrator | ===============================================================================
2025-05-28 17:34:59.001026 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.65s
2025-05-28 17:34:59.001036 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.57s
2025-05-28 17:34:59.001047 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s
2025-05-28 17:34:59.001057 | orchestrator |
2025-05-28 17:34:59.001068 | orchestrator |
2025-05-28 17:34:59.001096 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-28 17:34:59.001107 | orchestrator |
2025-05-28 17:34:59.001117 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-28 17:34:59.001128 | orchestrator | Wednesday 28 May 2025 17:33:13 +0000 (0:00:00.252) 0:00:00.252 *********
2025-05-28 17:34:59.001138 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:34:59.001149 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:34:59.001160 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:34:59.001171 | orchestrator |
2025-05-28 17:34:59.001182 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-28 17:34:59.001192 | orchestrator | Wednesday 28 May 2025 17:33:13 +0000 (0:00:00.272) 0:00:00.525 *********
2025-05-28 17:34:59.001203 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-05-28 17:34:59.001214 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-05-28 17:34:59.001224 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-05-28 17:34:59.001235 | orchestrator |
2025-05-28 17:34:59.001245 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-05-28 17:34:59.001256 | orchestrator |
2025-05-28 17:34:59.001267 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-05-28 17:34:59.001277 | orchestrator | Wednesday 28 May 2025 17:33:13 +0000 (0:00:00.407) 0:00:00.933 *********
2025-05-28 17:34:59.001288 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 17:34:59.001299 | orchestrator |
2025-05-28 17:34:59.001309 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-05-28 17:34:59.001320 | orchestrator | Wednesday 28 May 2025 17:33:14 +0000 (0:00:00.503) 0:00:01.436 *********
2025-05-28 17:34:59.001331 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-05-28 17:34:59.001342 | orchestrator |
2025-05-28 17:34:59.001352 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-05-28 17:34:59.001363 | orchestrator | Wednesday 28 May 2025 17:33:17 +0000 (0:00:03.511) 0:00:04.948 *********
2025-05-28 17:34:59.001373 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-05-28 17:34:59.001384 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-05-28 17:34:59.001395 | orchestrator |
2025-05-28 17:34:59.001406 | orchestrator | TASK [service-ks-register : magnum | Creating projects]
************************ 2025-05-28 17:34:59.001416 | orchestrator | Wednesday 28 May 2025 17:33:24 +0000 (0:00:06.690) 0:00:11.639 ********* 2025-05-28 17:34:59.001427 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-28 17:34:59.001438 | orchestrator | 2025-05-28 17:34:59.001448 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-05-28 17:34:59.001459 | orchestrator | Wednesday 28 May 2025 17:33:27 +0000 (0:00:03.504) 0:00:15.143 ********* 2025-05-28 17:34:59.001481 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-28 17:34:59.001492 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-05-28 17:34:59.001511 | orchestrator | 2025-05-28 17:34:59.001522 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-05-28 17:34:59.001533 | orchestrator | Wednesday 28 May 2025 17:33:31 +0000 (0:00:03.934) 0:00:19.078 ********* 2025-05-28 17:34:59.001544 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-28 17:34:59.001555 | orchestrator | 2025-05-28 17:34:59.001565 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-05-28 17:34:59.001576 | orchestrator | Wednesday 28 May 2025 17:33:35 +0000 (0:00:03.655) 0:00:22.734 ********* 2025-05-28 17:34:59.001587 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-05-28 17:34:59.001598 | orchestrator | 2025-05-28 17:34:59.001608 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-05-28 17:34:59.001619 | orchestrator | Wednesday 28 May 2025 17:33:39 +0000 (0:00:03.789) 0:00:26.523 ********* 2025-05-28 17:34:59.001630 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:34:59.001640 | orchestrator | 2025-05-28 17:34:59.001651 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-05-28 17:34:59.001662 | orchestrator | Wednesday 28 May 2025 17:33:42 +0000 (0:00:03.246) 0:00:29.769 ********* 2025-05-28 17:34:59.001672 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:34:59.001683 | orchestrator | 2025-05-28 17:34:59.001693 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-05-28 17:34:59.001704 | orchestrator | Wednesday 28 May 2025 17:33:46 +0000 (0:00:04.024) 0:00:33.793 ********* 2025-05-28 17:34:59.001715 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:34:59.001725 | orchestrator | 2025-05-28 17:34:59.001736 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-05-28 17:34:59.001747 | orchestrator | Wednesday 28 May 2025 17:33:50 +0000 (0:00:03.582) 0:00:37.375 ********* 2025-05-28 17:34:59.001767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 17:34:59.001784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 17:34:59.001796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 17:34:59.001843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 17:34:59.001857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 17:34:59.001874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 17:34:59.001886 | orchestrator | 2025-05-28 17:34:59.001897 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-05-28 17:34:59.001908 | orchestrator | Wednesday 28 May 2025 17:33:51 +0000 (0:00:01.379) 0:00:38.754 ********* 2025-05-28 17:34:59.001919 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:34:59.001930 | orchestrator | 2025-05-28 17:34:59.001941 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-05-28 17:34:59.001951 | orchestrator | Wednesday 28 May 2025 17:33:51 +0000 (0:00:00.133) 0:00:38.888 ********* 2025-05-28 17:34:59.001962 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:34:59.001973 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:34:59.001984 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:34:59.001994 | orchestrator | 2025-05-28 17:34:59.002005 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-05-28 17:34:59.002070 | orchestrator | Wednesday 28 May 2025 17:33:52 +0000 (0:00:00.441) 0:00:39.329 ********* 2025-05-28 17:34:59.002085 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-28 17:34:59.002096 | orchestrator | 2025-05-28 17:34:59.002107 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-05-28 17:34:59.002126 | orchestrator | Wednesday 28 May 2025 17:33:53 +0000 (0:00:00.897) 0:00:40.227 ********* 2025-05-28 17:34:59.002137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 17:34:59.002161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 17:34:59.002173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 17:34:59.002190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 17:34:59.002202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 17:34:59.002221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 17:34:59.002232 | orchestrator | 2025-05-28 17:34:59.002243 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-05-28 17:34:59.002254 | orchestrator | Wednesday 28 May 2025 17:33:55 +0000 (0:00:02.372) 0:00:42.599 ********* 2025-05-28 17:34:59.002264 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:34:59.002275 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:34:59.002286 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:34:59.002297 | orchestrator | 2025-05-28 17:34:59.002307 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-28 17:34:59.002324 | orchestrator | Wednesday 28 May 2025 17:33:55 +0000 (0:00:00.300) 0:00:42.900 ********* 2025-05-28 17:34:59.002335 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:34:59.002346 | orchestrator | 2025-05-28 17:34:59.002357 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-05-28 17:34:59.002368 | orchestrator | Wednesday 28 May 2025 17:33:56 +0000 (0:00:00.726) 0:00:43.627 ********* 2025-05-28 17:34:59.002379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 17:34:59.002395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 17:34:59.002407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 17:34:59.002426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 17:34:59.002446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 17:34:59.002458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 17:34:59.002469 | orchestrator | 2025-05-28 17:34:59.002480 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-05-28 17:34:59.002491 | orchestrator | Wednesday 28 May 2025 17:33:58 +0000 (0:00:02.312) 0:00:45.939 ********* 2025-05-28 17:34:59.002508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-28 17:34:59.002534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-28 17:34:59.002546 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:34:59.002557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-28 17:34:59.002577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-28 17:34:59.002588 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:34:59.002599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-28 17:34:59.002616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-28 17:34:59.002634 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:34:59.002646 | orchestrator | 2025-05-28 17:34:59.002656 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-05-28 17:34:59.002667 | orchestrator | Wednesday 28 May 2025 17:33:59 +0000 (0:00:00.612) 0:00:46.552 ********* 2025-05-28 17:34:59.002678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-28 17:34:59.002690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-28 17:34:59.002701 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:34:59.002718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-28 17:34:59.002730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-28 17:34:59.002748 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:34:59.002765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-28 17:34:59.002776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-28 17:34:59.002787 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:34:59.002798 | orchestrator | 2025-05-28 17:34:59.002895 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-05-28 17:34:59.002907 | orchestrator | Wednesday 28 May 2025 17:34:00 +0000 (0:00:01.302) 0:00:47.854 ********* 2025-05-28 17:34:59.002926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 17:34:59.002938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 17:34:59.002957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 17:34:59.002977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 17:34:59.002989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 17:34:59.003008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 17:34:59.003019 | orchestrator | 2025-05-28 17:34:59.003030 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-05-28 17:34:59.003041 | orchestrator | Wednesday 28 May 2025 17:34:02 +0000 (0:00:02.228) 0:00:50.083 ********* 2025-05-28 17:34:59.003053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 17:34:59.003076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 17:34:59.003088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 17:34:59.003099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 17:34:59.003120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 17:34:59.003132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 17:34:59.003149 | orchestrator | 2025-05-28 17:34:59.003160 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-05-28 17:34:59.003171 | orchestrator | Wednesday 28 May 2025 17:34:07 +0000 (0:00:04.913) 0:00:54.996 ********* 2025-05-28 17:34:59.003187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-28 17:34:59.003199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-28 17:34:59.003211 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:34:59.003222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-28 17:34:59.003241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 
'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-28 17:34:59.003253 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:34:59.003264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-28 17:34:59.003288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-28 17:34:59.003299 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:34:59.003310 | orchestrator | 2025-05-28 17:34:59.003321 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-05-28 17:34:59.003332 | orchestrator | Wednesday 28 May 2025 17:34:08 +0000 (0:00:00.810) 0:00:55.806 ********* 2025-05-28 17:34:59.003343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 17:34:59.003361 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 17:34:59.003372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 17:34:59.003390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 17:34:59.003406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 17:34:59.003418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 17:34:59.003429 | orchestrator | 2025-05-28 17:34:59.003440 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-28 17:34:59.003451 | orchestrator | Wednesday 28 May 2025 17:34:10 +0000 (0:00:02.104) 0:00:57.911 ********* 2025-05-28 17:34:59.003462 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:34:59.003473 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:34:59.003484 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:34:59.003494 | orchestrator | 2025-05-28 17:34:59.003505 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-05-28 17:34:59.003516 | orchestrator | Wednesday 28 May 2025 17:34:10 +0000 (0:00:00.277) 0:00:58.188 ********* 2025-05-28 17:34:59.003527 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:34:59.003538 | orchestrator | 2025-05-28 17:34:59.003549 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-05-28 17:34:59.003559 | orchestrator | Wednesday 28 May 2025 17:34:12 +0000 (0:00:02.014) 0:01:00.203 ********* 2025-05-28 17:34:59.003570 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:34:59.003581 | orchestrator | 2025-05-28 17:34:59.003592 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-05-28 17:34:59.003612 | orchestrator | Wednesday 28 May 2025 17:34:15 +0000 (0:00:02.037) 0:01:02.240 ********* 2025-05-28 17:34:59.003629 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:34:59.003640 | orchestrator | 2025-05-28 17:34:59.003651 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-28 17:34:59.003662 | orchestrator | Wednesday 28 May 2025 17:34:30 +0000 (0:00:15.769) 0:01:18.010 ********* 2025-05-28 17:34:59.003672 | orchestrator | 2025-05-28 17:34:59.003683 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-28 17:34:59.003694 | orchestrator | Wednesday 28 May 2025 17:34:30 +0000 (0:00:00.067) 0:01:18.077 ********* 2025-05-28 17:34:59.003705 | orchestrator | 2025-05-28 17:34:59.003715 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-28 17:34:59.003726 | orchestrator | Wednesday 28 May 2025 17:34:30 +0000 (0:00:00.059) 0:01:18.137 ********* 2025-05-28 17:34:59.003736 | orchestrator | 2025-05-28 17:34:59.003747 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-05-28 17:34:59.003758 | orchestrator | Wednesday 28 May 2025 17:34:30 +0000 (0:00:00.077) 0:01:18.214 ********* 2025-05-28 17:34:59.003769 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:34:59.003779 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:34:59.003790 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:34:59.003820 | orchestrator | 
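Every "changed:"/"skipping:" item in the magnum tasks above is one entry of the same service map: kolla-ansible loops over a dict whose values carry the container name, the inventory group that should run it, the image, the bind mounts, an optional healthcheck, and the haproxy listeners (one internal, one external behind api.testbed.osism.xyz). A host reports "skipping" for an item when it is not in that service's group or when the task's own condition (for example, backend TLS being disabled for the service-cert-copy tasks) is false. The sketch below reconstructs the magnum-api entry from the log items; the variable name `magnum_services` and the `services_for_host` helper are illustrative assumptions, not the kolla-ansible source.

```python
# Reconstruction of one entry of the service map the tasks above iterate.
# Field names and values are copied from the log items; `magnum_services`
# and `services_for_host` are illustrative assumptions.

magnum_services = {
    "magnum-api": {
        "container_name": "magnum_api",
        "group": "magnum-api",  # inventory group that should run it
        "enabled": True,
        "image": "registry.osism.tech/kolla/magnum-api:2024.2",
        "environment": {"DUMMY_ENVIRONMENT": "kolla_useless_env"},
        "volumes": [
            "/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "healthcheck": {  # becomes the container's healthcheck
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9511"],
            "timeout": "30",
        },
        "haproxy": {  # one internal and one external listener on port 9511
            "magnum_api": {
                "enabled": "yes", "mode": "http", "external": False,
                "port": "9511", "listen_port": "9511",
            },
            "magnum_api_external": {
                "enabled": "yes", "mode": "http", "external": True,
                "external_fqdn": "api.testbed.osism.xyz",
                "port": "9511", "listen_port": "9511",
            },
        },
    },
}

def services_for_host(services, host_groups):
    """Yield the services a host should handle: enabled, and the host is a
    member of the service's group (assumed rule behind changed vs. skipping)."""
    for name, svc in services.items():
        if svc["enabled"] and svc["group"] in host_groups:
            yield name, svc

# All three testbed nodes are in both magnum groups here, so each node
# renders config for magnum-api and magnum-conductor.
for name, svc in services_for_host(magnum_services, {"magnum-api"}):
    print(name, "->", svc["container_name"])
```

Note that the healthcheck_curl address differs per item (192.168.16.10/.11/.12) because the bind address is templated per host, while the external FQDN is the same for all nodes.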
2025-05-28 17:34:59.003832 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2025-05-28 17:34:59.003843 | orchestrator | Wednesday 28 May 2025 17:34:46 +0000 (0:00:15.382) 0:01:33.596 *********
2025-05-28 17:34:59.003854 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:34:59.003864 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:34:59.003875 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:34:59.003886 | orchestrator |
2025-05-28 17:34:59.003896 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 17:34:59.003908 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-28 17:34:59.003919 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-28 17:34:59.003930 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-28 17:34:59.003941 | orchestrator |
2025-05-28 17:34:59.003952 | orchestrator |
2025-05-28 17:34:59.003963 | orchestrator | TASKS RECAP ********************************************************************
2025-05-28 17:34:59.003973 | orchestrator | Wednesday 28 May 2025 17:34:57 +0000 (0:00:10.917) 0:01:44.514 *********
2025-05-28 17:34:59.003984 | orchestrator | ===============================================================================
2025-05-28 17:34:59.003995 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.77s
2025-05-28 17:34:59.004005 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 15.38s
2025-05-28 17:34:59.004022 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 10.92s
2025-05-28 17:34:59.004033 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.69s
2025-05-28 17:34:59.004044 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 4.91s
2025-05-28 17:34:59.004054 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.02s
2025-05-28 17:34:59.004065 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.93s
2025-05-28 17:34:59.004076 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.79s
2025-05-28 17:34:59.004086 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.66s
2025-05-28 17:34:59.004097 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.58s
2025-05-28 17:34:59.004115 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.51s
2025-05-28 17:34:59.004126 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.50s
2025-05-28 17:34:59.004137 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.25s
2025-05-28 17:34:59.004147 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.37s
2025-05-28 17:34:59.004158 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.31s
2025-05-28 17:34:59.004169 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.23s
2025-05-28 17:34:59.004179 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.10s
2025-05-28 17:34:59.004190 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.04s
2025-05-28 17:34:59.004201 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.01s
2025-05-28 17:34:59.004211 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.38s
2025-05-28 17:34:59.004222 | orchestrator | 2025-05-28 17:34:59 | INFO  | Task 8ab5193b-c2ba-4252-ac9f-ee2dda347044 is in state STARTED
2025-05-28 17:34:59.004233 | orchestrator | 2025-05-28 17:34:59 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:35:02.058755 | orchestrator | 2025-05-28 17:35:02 | INFO  | Task ef03f1a9-5db9-4885-b01e-f3b1509baaf4 is in state STARTED
2025-05-28 17:35:02.064555 | orchestrator | 2025-05-28 17:35:02 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:35:02.064628 | orchestrator | 2025-05-28 17:35:02 | INFO  | Task 8ab5193b-c2ba-4252-ac9f-ee2dda347044 is in state STARTED
2025-05-28 17:35:02.064652 | orchestrator | 2025-05-28 17:35:02 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:35:05.103243 | orchestrator | 2025-05-28 17:35:05 | INFO  | Task ef03f1a9-5db9-4885-b01e-f3b1509baaf4 is in state STARTED
2025-05-28 17:35:05.103723 | orchestrator | 2025-05-28 17:35:05 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:35:05.104543 | orchestrator | 2025-05-28 17:35:05 | INFO  | Task 8ab5193b-c2ba-4252-ac9f-ee2dda347044 is in state STARTED
2025-05-28 17:35:05.104723 | orchestrator | 2025-05-28 17:35:05 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:35:08.133483 | orchestrator | 2025-05-28 17:35:08 | INFO  | Task ef03f1a9-5db9-4885-b01e-f3b1509baaf4 is in state STARTED
2025-05-28 17:35:08.134128 | orchestrator | 2025-05-28 17:35:08 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:35:08.134856 | orchestrator | 2025-05-28 17:35:08 | INFO  | Task 8ab5193b-c2ba-4252-ac9f-ee2dda347044 is in state STARTED
2025-05-28 17:35:08.134893 | orchestrator | 2025-05-28 17:35:08 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:35:11.165734 | orchestrator | 2025-05-28 17:35:11 | INFO  | Task ef03f1a9-5db9-4885-b01e-f3b1509baaf4 is in state STARTED
2025-05-28 17:35:11.165926 | orchestrator | 2025-05-28 17:35:11 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:35:11.167235 | orchestrator | 2025-05-28 17:35:11 | INFO  | Task 8ab5193b-c2ba-4252-ac9f-ee2dda347044 is in state STARTED
2025-05-28 17:35:11.167325 | orchestrator | 2025-05-28 17:35:11 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:35:14.213842 | orchestrator | 2025-05-28 17:35:14 | INFO  | Task ef03f1a9-5db9-4885-b01e-f3b1509baaf4 is in state STARTED
2025-05-28 17:35:14.214448 | orchestrator | 2025-05-28 17:35:14 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:35:14.216706 | orchestrator | 2025-05-28 17:35:14 | INFO  | Task 8ab5193b-c2ba-4252-ac9f-ee2dda347044 is in state STARTED
2025-05-28 17:35:14.216822 | orchestrator | 2025-05-28 17:35:14 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:35:17.262406 | orchestrator | 2025-05-28 17:35:17 | INFO  | Task ef03f1a9-5db9-4885-b01e-f3b1509baaf4 is in state STARTED
2025-05-28 17:35:17.265063 | orchestrator | 2025-05-28 17:35:17 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:35:17.267353 | orchestrator | 2025-05-28 17:35:17 | INFO  | Task 8ab5193b-c2ba-4252-ac9f-ee2dda347044 is in state STARTED
2025-05-28 17:35:17.267399 | orchestrator | 2025-05-28 17:35:17 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:35:20.313070 | orchestrator | 2025-05-28 17:35:20 | INFO  | Task ef03f1a9-5db9-4885-b01e-f3b1509baaf4 is in state STARTED
2025-05-28 17:35:20.313625 | orchestrator | 2025-05-28 17:35:20 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:35:20.315553 | orchestrator | 2025-05-28 17:35:20 | INFO  | Task 8ab5193b-c2ba-4252-ac9f-ee2dda347044 is in state STARTED
2025-05-28 17:35:20.315578 | orchestrator | 2025-05-28 17:35:20 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:35:23.369918 | orchestrator | 2025-05-28 17:35:23 | INFO  | Task ef03f1a9-5db9-4885-b01e-f3b1509baaf4 is in state STARTED
2025-05-28 17:35:23.371310 | orchestrator | 2025-05-28 17:35:23 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:35:23.373201 | orchestrator | 2025-05-28 17:35:23 | INFO  | Task 8ab5193b-c2ba-4252-ac9f-ee2dda347044 is in state STARTED
2025-05-28 17:35:23.373227 | orchestrator | 2025-05-28 17:35:23 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:35:26.422289 | orchestrator | 2025-05-28 17:35:26 | INFO  | Task ef03f1a9-5db9-4885-b01e-f3b1509baaf4 is in state STARTED
2025-05-28 17:35:26.423724 | orchestrator | 2025-05-28 17:35:26 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:35:26.425915 | orchestrator | 2025-05-28 17:35:26 | INFO  | Task 8ab5193b-c2ba-4252-ac9f-ee2dda347044 is in state STARTED
2025-05-28 17:35:26.425931 | orchestrator | 2025-05-28 17:35:26 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:35:29.488065 | orchestrator | 2025-05-28 17:35:29 | INFO  | Task ef03f1a9-5db9-4885-b01e-f3b1509baaf4 is in state STARTED
2025-05-28 17:35:29.488198 | orchestrator | 2025-05-28 17:35:29 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:35:29.488213 | orchestrator | 2025-05-28 17:35:29 | INFO  | Task 8ab5193b-c2ba-4252-ac9f-ee2dda347044 is in state STARTED
2025-05-28 17:35:29.488225 | orchestrator | 2025-05-28 17:35:29 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:35:32.531461 | orchestrator | 2025-05-28 17:35:32 | INFO  | Task ef03f1a9-5db9-4885-b01e-f3b1509baaf4 is in state STARTED
2025-05-28 17:35:32.532923 | orchestrator | 2025-05-28 17:35:32 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:35:32.534950 | orchestrator | 2025-05-28 17:35:32 | INFO  | Task 8ab5193b-c2ba-4252-ac9f-ee2dda347044 is in state STARTED
2025-05-28 17:35:32.534979 | orchestrator | 2025-05-28 17:35:32 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:35:35.583261 | orchestrator | 2025-05-28 17:35:35 | INFO  | Task ef03f1a9-5db9-4885-b01e-f3b1509baaf4 is in state STARTED
2025-05-28 17:35:35.584866 | orchestrator | 2025-05-28 17:35:35 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:35:35.586448 | orchestrator | 2025-05-28 17:35:35 | INFO  | Task 8ab5193b-c2ba-4252-ac9f-ee2dda347044 is in state STARTED
2025-05-28 17:35:35.586775 | orchestrator | 2025-05-28 17:35:35 | INFO  | Wait 1 second(s) until the next check
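The interleaved "Task <uuid> is in state STARTED" lines come from the OSISM layer rather than from Ansible: the deploy wrapper enqueues each play as a task on the manager and then polls the task IDs (here ef03f1a9-…, c1daa203-… and 8ab5193b-…) roughly every three seconds, printing "Wait 1 second(s) until the next check" between rounds, until each task reaches a terminal state such as SUCCESS. Below is a minimal sketch of a polling loop of this shape; the `get_task_state` helper is a toy stand-in for the real manager query, not the osism client API.

```python
import time

# Toy stand-in for the manager lookup: each task reports STARTED a few more
# times and then SUCCESS. The real client resolves the task UUID against the
# OSISM manager; this simulation only exists so the example terminates.
_remaining_checks = {"ef03f1a9": 2, "c1daa203": 4, "8ab5193b": 3}

def get_task_state(task_id: str) -> str:
    _remaining_checks[task_id] -= 1
    return "STARTED" if _remaining_checks[task_id] > 0 else "SUCCESS"

def wait_for_tasks(task_ids, interval: float = 1.0) -> None:
    """Poll every pending task once per round, log one line per check, and
    sleep between rounds until no task is left in the STARTED state."""
    pending = list(task_ids)
    while pending:
        for task_id in list(pending):
            state = get_task_state(task_id)
            print(f"INFO  | Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.remove(task_id)
        if pending:
            print(f"INFO  | Wait {interval:.0f} second(s) until the next check")
            time.sleep(interval)

wait_for_tasks(["ef03f1a9", "c1daa203", "8ab5193b"])
```

This buffering also explains the timestamps that follow: the Grafana play below ran from 17:33:29 onward, but its output only surfaces in the console once its manager task flips to SUCCESS at 17:35:41.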
2025-05-28 17:35:38.631651 | orchestrator | 2025-05-28 17:35:38 | INFO  | Task ef03f1a9-5db9-4885-b01e-f3b1509baaf4 is in state STARTED
2025-05-28 17:35:38.632699 | orchestrator | 2025-05-28 17:35:38 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:35:38.632883 | orchestrator | 2025-05-28 17:35:38 | INFO  | Task 8ab5193b-c2ba-4252-ac9f-ee2dda347044 is in state STARTED
2025-05-28 17:35:38.632900 | orchestrator | 2025-05-28 17:35:38 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:35:41.680519 | orchestrator | 2025-05-28 17:35:41 | INFO  | Task ef03f1a9-5db9-4885-b01e-f3b1509baaf4 is in state SUCCESS
2025-05-28 17:35:41.681546 | orchestrator |
2025-05-28 17:35:41.681589 | orchestrator |
2025-05-28 17:35:41.681603 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-28 17:35:41.681863 | orchestrator |
2025-05-28 17:35:41.681878 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-28 17:35:41.681890 | orchestrator | Wednesday 28 May 2025 17:33:29 +0000 (0:00:00.225) 0:00:00.225 *********
2025-05-28 17:35:41.682542 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:35:41.682568 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:35:41.682587 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:35:41.682604 | orchestrator |
2025-05-28 17:35:41.682623 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-28 17:35:41.682641 | orchestrator | Wednesday 28 May 2025 17:33:29 +0000 (0:00:00.238) 0:00:00.464 *********
2025-05-28 17:35:41.682683 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2025-05-28 17:35:41.682704 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2025-05-28 17:35:41.682722 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2025-05-28 17:35:41.682741 | orchestrator |
2025-05-28 17:35:41.682788 | orchestrator | PLAY [Apply role grafana] ******************************************************
2025-05-28 17:35:41.682808 | orchestrator |
2025-05-28 17:35:41.682828 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-05-28 17:35:41.682839 | orchestrator | Wednesday 28 May 2025 17:33:29 +0000 (0:00:00.273) 0:00:00.737 *********
2025-05-28 17:35:41.682851 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 17:35:41.682863 | orchestrator |
2025-05-28 17:35:41.682874 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2025-05-28 17:35:41.682885 | orchestrator | Wednesday 28 May 2025 17:33:29 +0000 (0:00:00.372) 0:00:01.109 *********
2025-05-28 17:35:41.682900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-28 17:35:41.682917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes':
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-28 17:35:41.682929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-28 17:35:41.682966 | orchestrator | 2025-05-28 17:35:41.682978 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-05-28 17:35:41.682989 | orchestrator | Wednesday 28 May 2025 17:33:30 +0000 (0:00:00.642) 0:00:01.751 ********* 2025-05-28 17:35:41.683000 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-05-28 17:35:41.683087 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-05-28 17:35:41.683099 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-28 17:35:41.683110 | orchestrator | 2025-05-28 17:35:41.683121 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-05-28 17:35:41.683132 | orchestrator | Wednesday 28 May 2025 17:33:31 +0000 (0:00:00.766) 0:00:02.517 ********* 2025-05-28 17:35:41.683143 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:35:41.683154 | orchestrator | 2025-05-28 17:35:41.683235 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-05-28 17:35:41.683247 | orchestrator | Wednesday 28 May 2025 17:33:32 +0000 (0:00:00.636) 0:00:03.154 ********* 2025-05-28 17:35:41.683306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-28 17:35:41.683322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-28 17:35:41.683333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-28 17:35:41.683345 | orchestrator | 2025-05-28 17:35:41.683355 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-05-28 17:35:41.683366 | orchestrator | Wednesday 28 May 2025 17:33:33 +0000 (0:00:01.309) 0:00:04.463 ********* 2025-05-28 17:35:41.683387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-28 17:35:41.683399 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:41.683411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-28 17:35:41.683422 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:41.683465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 
'listen_port': '3000'}}}})  2025-05-28 17:35:41.683479 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:41.683490 | orchestrator | 2025-05-28 17:35:41.683501 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-05-28 17:35:41.683511 | orchestrator | Wednesday 28 May 2025 17:33:33 +0000 (0:00:00.371) 0:00:04.835 ********* 2025-05-28 17:35:41.683528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-28 17:35:41.683539 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:41.683551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-28 17:35:41.683569 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:41.683581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-28 17:35:41.683592 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:41.683603 | orchestrator | 2025-05-28 17:35:41.683613 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-05-28 17:35:41.683624 | orchestrator | Wednesday 28 May 2025 17:33:34 +0000 (0:00:00.804) 0:00:05.639 ********* 2025-05-28 17:35:41.683635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-28 17:35:41.683647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-28 17:35:41.683690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-28 17:35:41.683704 | orchestrator | 2025-05-28 17:35:41.683721 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-05-28 17:35:41.683732 | orchestrator | Wednesday 28 May 2025 17:33:35 +0000 (0:00:01.315) 0:00:06.955 ********* 2025-05-28 17:35:41.683808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-28 17:35:41.683829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-28 17:35:41.683841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-28 17:35:41.683852 | orchestrator | 2025-05-28 17:35:41.683863 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-05-28 17:35:41.683876 | orchestrator | Wednesday 28 May 2025 17:33:37 +0000 (0:00:01.348) 0:00:08.303 ********* 2025-05-28 17:35:41.683888 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:41.683900 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:41.683912 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:41.683924 | orchestrator | 2025-05-28 17:35:41.683937 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-05-28 17:35:41.683949 | orchestrator | Wednesday 28 May 2025 17:33:37 +0000 (0:00:00.517) 0:00:08.821 ********* 2025-05-28 17:35:41.683962 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-28 17:35:41.683974 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-28 17:35:41.683986 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-28 17:35:41.683999 | orchestrator | 2025-05-28 17:35:41.684011 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-05-28 17:35:41.684023 | orchestrator | Wednesday 28 May 2025 17:33:38 +0000 (0:00:01.305) 0:00:10.127 ********* 2025-05-28 17:35:41.684035 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-28 17:35:41.684048 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-28 17:35:41.684060 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-28 17:35:41.684072 | orchestrator | 2025-05-28 17:35:41.684084 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-05-28 17:35:41.684096 | orchestrator | Wednesday 28 May 2025 17:33:40 +0000 (0:00:01.213) 0:00:11.340 ********* 2025-05-28 17:35:41.684142 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-28 17:35:41.684157 | orchestrator | 2025-05-28 17:35:41.684169 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-05-28 17:35:41.684181 | orchestrator | Wednesday 28 May 2025 17:33:40 +0000 (0:00:00.751) 0:00:12.091 ********* 2025-05-28 17:35:41.684193 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-05-28 17:35:41.684205 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-05-28 17:35:41.684216 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:35:41.684227 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:35:41.684244 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:35:41.684255 | orchestrator | 2025-05-28 17:35:41.684266 | orchestrator | TASK [grafana : Prune templated Grafana 
dashboards] **************************** 2025-05-28 17:35:41.684282 | orchestrator | Wednesday 28 May 2025 17:33:41 +0000 (0:00:00.655) 0:00:12.747 ********* 2025-05-28 17:35:41.684293 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:41.684304 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:41.684314 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:41.684325 | orchestrator | 2025-05-28 17:35:41.684335 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-05-28 17:35:41.684346 | orchestrator | Wednesday 28 May 2025 17:33:42 +0000 (0:00:00.502) 0:00:13.249 ********* 2025-05-28 17:35:41.684358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1079528, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450420.973429, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 17:35:41.684370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1079528, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450420.973429, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 17:35:41.684382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1079528, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450420.973429, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 17:35:41.684393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1079523, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450420.967429, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 17:35:41.684434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': 
2025-05-28 17:35:41.684325 | orchestrator |
2025-05-28 17:35:41.684335 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-05-28 17:35:41.684346 | orchestrator | Wednesday 28 May 2025 17:33:42 +0000 (0:00:00.502) 0:00:13.249 *********
Each of the following dashboards reported changed on testbed-node-0, testbed-node-1 and testbed-node-2. Every item in the loop output carries identical stat metadata: regular file, mode 0644 (rw-r--r--), owner root:root (uid 0, gid 0), dev 204, nlink 1, atime/mtime 1748390523.0, source path /operations/grafana/dashboards/<key>.

    key                                size    inode    ctime
    ceph/rgw-s3-analytics.json         167897  1079528  1748450420.973429
    ceph/radosgw-detail.json            19695  1079523  1748450420.967429
    ceph/osds-overview.json             38432  1079520  1748450420.9654288
    ceph/rbd-details.json               12997  1079526  1748450420.970429
    ceph/host-details.json              44791  1079516  1748450420.960429
    ceph/pool-detail.json               19609  1079521  1748450420.9654288
    ceph/radosgw-sync-overview.json     16156  1079525  1748450420.969429
    ceph/cephfs-overview.json            9025  1079515  1748450420.9584289
    ceph/README.md                         84  1079509  1748450420.951429
    ceph/hosts-overview.json            27218  1079517  1748450420.962429
    ceph/ceph-cluster.json              34113  1079512  1748450420.9554288
    ceph/radosgw-overview.json          39556  1079524  1748450420.969429
    ceph/multi-cluster-overview.json    62676  1079518  1748450420.963429
    ceph/rbd-overview.json              25686  1079527  1748450420.970429
    ceph/ceph_pools.json                25279  1079514  1748450420.9584289
    ceph/pool-overview.json             49139  1079522  1748450420.966429
    ceph/ceph-cluster-advanced.json    117836  1079510  1748450420.954429
    ceph/ceph_overview.json             80386  1079513  1748450420.956429
    ceph/osd-device-details.json        26655  1079519  1748450420.964429
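The per-item "changed" entries in this task come from looping over the file inventory gathered by "Find custom grafana dashboards" on the deploy host and copying each dashboard onto the controllers. The key/value shape of the logged items indicates the role iterates a dict of relative path to stat data; the sketch below shows the same find-then-copy pattern with a plain list instead, as an illustration of the mechanism rather than the role's actual tasks:

    # Hedged sketch of the find-and-copy pattern suggested by the log output;
    # the paths mirror the log, the task structure itself is an assumption.
    - name: Find custom grafana dashboards
      ansible.builtin.find:
        paths: /operations/grafana/dashboards
        recurse: true
      delegate_to: localhost
      register: dashboards

    - name: Copying over custom dashboards
      ansible.builtin.copy:
        src: "{{ item.path }}"
        dest: "/etc/kolla/grafana/dashboards/{{ item.path | relpath('/operations/grafana/dashboards') }}"
        mode: "0644"
      loop: "{{ dashboards.files }}"

Because ansible.builtin.copy is idempotent on content, a re-run reports ok instead of changed for dashboards that have not been modified.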
The infrastructure dashboards are copied in the same pass, again changed on all three nodes and with the same stat metadata as above:

    key                                            size    inode    ctime
    infrastructure/node_exporter_full.json         682774  1079549  1748450420.994429
    infrastructure/libvirt.json                     29672  1079543  1748450420.9844291
    infrastructure/alertmanager-overview.json        9645  1079530  1748450420.975429
    infrastructure/prometheus_alertmanager.json    115472  1079572  1748450421.001429
    infrastructure/blackbox.json                    31128  1079531  1748450420.975429
    infrastructure/prometheus-remote-write.json     22317  1079566  1748450420.9974291
    infrastructure/rabbitmq.json                   222049  1079577  1748450421.004429
    infrastructure/node_exporter_side_by_side.json  70691  1079557  1748450420.995429
    infrastructure/opensearch.json                  65458  1079563  1748450420.9974291
    infrastructure/cadvisor.json                    53882  1079532  1748450420.976429
    infrastructure/memcached.json                   24243  1079544  1748450420.985429
    infrastructure/redfish.json                     38087  1079585  1748450421.0064292
    infrastructure/prometheus.json                  21898  1079570  1748450420.9994292
    infrastructure/elasticsearch.json              187864  1079534  1748450420.979429
    infrastructure/database.json                    30898  1079533  1748450420.977429

The loop output then continues with infrastructure/fluentd.json:

2025-05-28 17:35:41.686486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1079536, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450420.980429, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp':
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 17:35:41.686506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1079536, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450420.980429, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 17:35:41.686521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1079536, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450420.980429, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 17:35:41.686543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1079539, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450420.9844291, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 17:35:41.686570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1079539, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450420.9844291, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 17:35:41.686589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1079539, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450420.9844291, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 17:35:41.686626 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1079545, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450420.985429, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 17:35:41.686645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1079545, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450420.985429, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 17:35:41.686665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1079545, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450420.985429, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 17:35:41.686693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1079561, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450420.9964292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 17:35:41.686719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1079561, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450420.9964292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 17:35:41.686739 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1079561, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450420.9964292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 17:35:41.686788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1079546, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450420.986429, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 17:35:41.686799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1079546, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450420.986429, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 17:35:41.686811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1079546, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450420.986429, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 17:35:41.686822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1079596, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450421.0074291, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 17:35:41.686845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 
2025-05-28 17:35:41.686857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1079596, 'dev': 204, 'nlink': 1, 'atime': 1748390523.0, 'mtime': 1748390523.0, 'ctime': 1748450421.0074291, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-28 17:35:41.686877 | orchestrator |
2025-05-28 17:35:41.686889 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2025-05-28 17:35:41.686900 | orchestrator | Wednesday 28 May 2025 17:34:18 +0000 (0:00:36.350) 0:00:49.599 *********
2025-05-28 17:35:41.686915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-28 17:35:41.686935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-28 17:35:41.686955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
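
The item dicts in the two tasks above are entries of a kolla-ansible-style service map that the role iterates with with_dict; each value describes one container and its HAProxy frontends. A minimal sketch of that shape, reconstructed only from values visible in the log (abridged; this is not the role's actual defaults file):

grafana_services:
  grafana:
    container_name: grafana
    group: grafana
    enabled: true
    image: registry.osism.tech/kolla/grafana:2024.2
    volumes:
      - "/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "/etc/timezone:/etc/timezone:ro"
      - "kolla_logs:/var/log/kolla/"
    haproxy:
      grafana_server:
        enabled: "yes"
        mode: "http"
        external: false
        port: "3000"
        listen_port: "3000"
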
2025-05-28 17:35:41.686973 | orchestrator |
2025-05-28 17:35:41.686984 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2025-05-28 17:35:41.686995 | orchestrator | Wednesday 28 May 2025 17:34:19 +0000 (0:00:01.070) 0:00:50.670 *********
2025-05-28 17:35:41.687006 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:35:41.687017 | orchestrator |
2025-05-28 17:35:41.687027 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2025-05-28 17:35:41.687038 | orchestrator | Wednesday 28 May 2025 17:34:21 +0000 (0:00:02.307) 0:00:52.978 *********
2025-05-28 17:35:41.687048 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:35:41.687059 | orchestrator |
2025-05-28 17:35:41.687070 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-05-28 17:35:41.687080 | orchestrator | Wednesday 28 May 2025 17:34:24 +0000 (0:00:00.058) 0:00:55.488 *********
2025-05-28 17:35:41.687091 | orchestrator |
2025-05-28 17:35:41.687101 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-05-28 17:35:41.687112 | orchestrator | Wednesday 28 May 2025 17:34:24 +0000 (0:00:00.064) 0:00:55.546 *********
2025-05-28 17:35:41.687123 | orchestrator |
2025-05-28 17:35:41.687147 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-05-28 17:35:41.687165 | orchestrator | Wednesday 28 May 2025 17:34:24 +0000 (0:00:00.063) 0:00:55.611 *********
2025-05-28 17:35:41.687176 | orchestrator |
2025-05-28 17:35:41.687187 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-05-28 17:35:41.687198 | orchestrator | Wednesday 28 May 2025 17:34:24 +0000 (0:00:00.063) 0:00:55.674 *********
2025-05-28 17:35:41.687208 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:35:41.687219 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:35:41.687230 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:35:41.687241 | orchestrator |
2025-05-28 17:35:41.687251 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-05-28 17:35:41.687267 | orchestrator | Wednesday 28 May 2025 17:34:26 +0000 (0:00:01.859) 0:00:57.534 *********
2025-05-28 17:35:41.687278 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:35:41.687289 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:35:41.687300 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-05-28 17:35:41.687311 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-05-28 17:35:41.687321 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
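
The FAILED - RETRYING lines above are Ansible's until/retries loop at work: the handler polls the freshly restarted Grafana until it answers, and each failed poll consumes one retry. A minimal sketch of the pattern, assuming the URL and variable names (the retry count of 12 matches the counter seen in the log):

- name: Waiting for grafana to start on first node
  ansible.builtin.uri:
    url: "http://{{ api_interface_address }}:3000/login"
    status_code: 200
  register: result
  until: result.status == 200
  retries: 12
  delay: 5
  run_once: true
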
2025-05-28 17:35:41.687332 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:35:41.687344 | orchestrator |
2025-05-28 17:35:41.687354 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-05-28 17:35:41.687365 | orchestrator | Wednesday 28 May 2025 17:35:04 +0000 (0:00:38.342) 0:01:35.876 *********
2025-05-28 17:35:41.687376 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:35:41.687386 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:35:41.687397 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:35:41.687407 | orchestrator |
2025-05-28 17:35:41.687418 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-05-28 17:35:41.687429 | orchestrator | Wednesday 28 May 2025 17:35:35 +0000 (0:00:30.505) 0:02:06.382 *********
2025-05-28 17:35:41.687439 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:35:41.687450 | orchestrator |
2025-05-28 17:35:41.687461 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-05-28 17:35:41.687471 | orchestrator | Wednesday 28 May 2025 17:35:37 +0000 (0:00:02.354) 0:02:08.737 *********
2025-05-28 17:35:41.687490 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:35:41.687509 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:35:41.687527 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:35:41.687546 | orchestrator |
2025-05-28 17:35:41.687563 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-05-28 17:35:41.687582 | orchestrator | Wednesday 28 May 2025 17:35:37 +0000 (0:00:00.298) 0:02:09.036 *********
2025-05-28 17:35:41.687601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-05-28 17:35:41.687623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-05-28 17:35:41.687642 | orchestrator |
2025-05-28 17:35:41.687661 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-05-28 17:35:41.687680 | orchestrator | Wednesday 28 May 2025 17:35:40 +0000 (0:00:02.391) 0:02:11.427 *********
2025-05-28 17:35:41.687728 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:35:41.687813 | orchestrator |
2025-05-28 17:35:41.687835 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 17:35:41.687853 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-28 17:35:41.687877 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-28 17:35:41.687888 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-28 17:35:41.687899 | orchestrator |
2025-05-28 17:35:41.687910 | orchestrator |
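
The "Enable grafana datasources" task above iterates a dict of datasource definitions and registers each enabled one through Grafana's HTTP API: the influxdb entry is skipped because its enabled flag is False, while the opensearch entry is posted. A minimal sketch of the pattern, assuming the endpoint address and variable names (POST /api/datasources is Grafana's documented provisioning call; status 409 covers a datasource that already exists):

- name: Enable grafana datasources
  ansible.builtin.uri:
    url: "https://api-int.testbed.osism.xyz:3000/api/datasources"
    method: POST
    user: admin
    password: "{{ grafana_admin_password }}"
    force_basic_auth: true
    body_format: json
    body: "{{ item.value.data }}"
    status_code: [200, 409]
  with_dict: "{{ grafana_data_sources }}"
  when: item.value.enabled | bool
  run_once: true
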
2025-05-28 17:35:41.687921 | orchestrator | TASKS RECAP ********************************************************************
2025-05-28 17:35:41.687931 | orchestrator | Wednesday 28 May 2025 17:35:40 +0000 (0:00:00.270) 0:02:11.698 *********
2025-05-28 17:35:41.687942 | orchestrator | ===============================================================================
2025-05-28 17:35:41.687953 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.34s
2025-05-28 17:35:41.687963 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 36.35s
2025-05-28 17:35:41.687974 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 30.51s
2025-05-28 17:35:41.687984 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.51s
2025-05-28 17:35:41.687995 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.39s
2025-05-28 17:35:41.688015 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.35s
2025-05-28 17:35:41.688026 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.31s
2025-05-28 17:35:41.688036 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.86s
2025-05-28 17:35:41.688047 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.35s
2025-05-28 17:35:41.688058 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.32s
2025-05-28 17:35:41.688068 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.31s
2025-05-28 17:35:41.688079 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.31s
2025-05-28 17:35:41.688097 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.21s
2025-05-28 17:35:41.688108 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.07s
2025-05-28 17:35:41.688118 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.80s
2025-05-28 17:35:41.688129 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.77s
2025-05-28 17:35:41.688140 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.75s
2025-05-28 17:35:41.688150 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.66s
2025-05-28 17:35:41.688161 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.64s
2025-05-28 17:35:41.688172 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.64s
2025-05-28 17:35:41.688182 | orchestrator | 2025-05-28 17:35:41 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:35:41.688193 | orchestrator | 2025-05-28 17:35:41 | INFO  | Task 8ab5193b-c2ba-4252-ac9f-ee2dda347044 is in state STARTED
2025-05-28 17:35:41.688204 | orchestrator | 2025-05-28 17:35:41 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:35:44.734717 | orchestrator | 2025-05-28 17:35:44 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state STARTED
2025-05-28 17:35:44.734977 | orchestrator | 2025-05-28 17:35:44 | INFO  | Task 8ab5193b-c2ba-4252-ac9f-ee2dda347044 is in state STARTED
2025-05-28 17:35:44.734995 | orchestrator | 2025-05-28 17:35:44 | INFO  | Wait 1 second(s) until the next check
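
The trailing INFO lines are the OSISM deploy wrapper polling its two manager tasks (identified by UUID) once per second until they leave the STARTED state. The same wait pattern can be expressed in Ansible with until/retries; the helper command below is purely hypothetical and only stands in for whatever reports the task state:

- name: Wait until a manager task has finished  # sketch of the polling loop above
  ansible.builtin.command: /usr/local/bin/task-state c1daa203-f755-4254-b626-9a23cffbb894  # hypothetical helper that prints the state
  register: task_state
  until: "'SUCCESS' in task_state.stdout"
  retries: 600
  delay: 1
  changed_when: false
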
2025-05-28 17:35:47.782291 | orchestrator | 2025-05-28 17:35:47 | INFO  | Task c1daa203-f755-4254-b626-9a23cffbb894 is in state SUCCESS
2025-05-28 17:35:47.783431 | orchestrator |
2025-05-28 17:35:47.783469 | orchestrator |
2025-05-28 17:35:47.783481 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-28 17:35:47.783493 | orchestrator |
2025-05-28 17:35:47.783504 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2025-05-28 17:35:47.783515 | orchestrator | Wednesday 28 May 2025 17:26:51 +0000 (0:00:00.206) 0:00:00.206 *********
2025-05-28 17:35:47.783526 | orchestrator | changed: [testbed-manager]
2025-05-28 17:35:47.783538 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:35:47.783549 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:35:47.783559 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:35:47.783569 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:35:47.783580 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:35:47.783591 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:35:47.783601 | orchestrator |
2025-05-28 17:35:47.783612 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-28 17:35:47.783623 | orchestrator | Wednesday 28 May 2025 17:26:52 +0000 (0:00:00.979) 0:00:01.186 *********
2025-05-28 17:35:47.783634 | orchestrator | changed: [testbed-manager]
2025-05-28 17:35:47.783644 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:35:47.783655 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:35:47.783665 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:35:47.783676 | orchestrator | changed: [testbed-node-3]
2025-05-28 17:35:47.783687 | orchestrator | changed: [testbed-node-4]
2025-05-28 17:35:47.783698 | orchestrator | changed: [testbed-node-5]
2025-05-28 17:35:47.783708 | orchestrator |
2025-05-28 17:35:47.783719 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-28 17:35:47.783729 | orchestrator | Wednesday 28 May 2025 17:26:53 +0000 (0:00:00.981) 0:00:02.167 *********
2025-05-28 17:35:47.783764 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2025-05-28 17:35:47.783777 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2025-05-28 17:35:47.783787 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2025-05-28 17:35:47.783798 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2025-05-28 17:35:47.783808 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2025-05-28 17:35:47.783819 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2025-05-28 17:35:47.783874 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2025-05-28 17:35:47.783888 | orchestrator |
2025-05-28 17:35:47.783899 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2025-05-28 17:35:47.783910 | orchestrator |
2025-05-28 17:35:47.783921 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-05-28 17:35:47.783932 | orchestrator | Wednesday 28 May 2025 17:26:54 +0000 (0:00:00.883) 0:00:03.051 *********
2025-05-28 17:35:47.783942 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 17:35:47.783953 | orchestrator |
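
The host-grouping play above builds dynamic inventory groups from facts so that later plays can target, for example, every host where nova is enabled (hence the enable_nova_True items). This is Ansible's group_by module; a minimal sketch under assumed variable names:

- name: Group hosts based on enabled services
  ansible.builtin.group_by:
    key: "enable_nova_{{ enable_nova | bool }}"
  changed_when: false
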
2025-05-28 17:35:47.783964 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2025-05-28 17:35:47.783974 | orchestrator | Wednesday 28 May 2025 17:26:54 +0000 (0:00:00.643) 0:00:03.694 *********
2025-05-28 17:35:47.783985 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2025-05-28 17:35:47.783996 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2025-05-28 17:35:47.784007 | orchestrator |
2025-05-28 17:35:47.784038 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2025-05-28 17:35:47.784051 | orchestrator | Wednesday 28 May 2025 17:26:58 +0000 (0:00:03.591) 0:00:07.286 *********
2025-05-28 17:35:47.784063 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-28 17:35:47.784075 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-28 17:35:47.784086 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:35:47.784097 | orchestrator |
2025-05-28 17:35:47.784107 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-05-28 17:35:47.784119 | orchestrator | Wednesday 28 May 2025 17:27:02 +0000 (0:00:03.749) 0:00:11.035 *********
2025-05-28 17:35:47.784144 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:35:47.784155 | orchestrator |
2025-05-28 17:35:47.784493 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2025-05-28 17:35:47.784528 | orchestrator | Wednesday 28 May 2025 17:27:03 +0000 (0:00:00.795) 0:00:11.831 *********
2025-05-28 17:35:47.784540 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:35:47.784551 | orchestrator |
2025-05-28 17:35:47.784562 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2025-05-28 17:35:47.784572 | orchestrator | Wednesday 28 May 2025 17:27:04 +0000 (0:00:01.495) 0:00:13.327 *********
2025-05-28 17:35:47.784583 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:35:47.784594 | orchestrator |
2025-05-28 17:35:47.784604 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-05-28 17:35:47.784615 | orchestrator | Wednesday 28 May 2025 17:27:07 +0000 (0:00:03.042) 0:00:16.369 *********
2025-05-28 17:35:47.784625 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:35:47.784636 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:35:47.784646 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:35:47.784657 | orchestrator |
2025-05-28 17:35:47.784667 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-05-28 17:35:47.784678 | orchestrator | Wednesday 28 May 2025 17:27:08 +0000 (0:00:00.461) 0:00:16.830 *********
2025-05-28 17:35:47.784688 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:35:47.784699 | orchestrator |
2025-05-28 17:35:47.784710 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2025-05-28 17:35:47.784720 | orchestrator | Wednesday 28 May 2025 17:27:37 +0000 (0:00:29.511) 0:00:46.341 *********
2025-05-28 17:35:47.784731 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:35:47.784766 | orchestrator |
2025-05-28 17:35:47.784862 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-05-28 17:35:47.784877 | orchestrator | Wednesday 28 May 2025 17:27:50 +0000 (0:00:13.389) 0:00:59.731 *********
2025-05-28 17:35:47.784888 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:35:47.784899 | orchestrator |
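
The database bootstrap above runs only on the first API host and creates the nova_api and nova_cell0 schemas plus the service user. A minimal sketch with the community.mysql modules, assuming the login variables (kolla-ansible wraps the same idea in its own helper tasks):

- name: Creating Nova databases
  community.mysql.mysql_db:
    login_host: "{{ database_address }}"
    login_user: root
    login_password: "{{ database_password }}"
    name: "{{ item }}"
  loop:
    - nova_cell0
    - nova_api
  run_once: true
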
2025-05-28 17:35:47.784909 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-05-28 17:35:47.784920 | orchestrator | Wednesday 28 May 2025 17:28:01 +0000 (0:00:10.836) 0:01:10.568 *********
2025-05-28 17:35:47.785959 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:35:47.785987 | orchestrator |
2025-05-28 17:35:47.785999 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2025-05-28 17:35:47.786010 | orchestrator | Wednesday 28 May 2025 17:28:03 +0000 (0:00:01.864) 0:01:12.432 *********
2025-05-28 17:35:47.786069 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:35:47.786081 | orchestrator |
2025-05-28 17:35:47.786092 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-05-28 17:35:47.786103 | orchestrator | Wednesday 28 May 2025 17:28:05 +0000 (0:00:01.676) 0:01:14.110 *********
2025-05-28 17:35:47.786113 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 17:35:47.786124 | orchestrator |
2025-05-28 17:35:47.786135 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-05-28 17:35:47.786146 | orchestrator | Wednesday 28 May 2025 17:28:07 +0000 (0:00:01.817) 0:01:15.927 *********
2025-05-28 17:35:47.786156 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:35:47.786167 | orchestrator |
2025-05-28 17:35:47.786178 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-05-28 17:35:47.786189 | orchestrator | Wednesday 28 May 2025 17:28:24 +0000 (0:00:17.264) 0:01:33.192 *********
2025-05-28 17:35:47.786199 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:35:47.786210 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:35:47.786220 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:35:47.786231 | orchestrator |
2025-05-28 17:35:47.786242 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-05-28 17:35:47.786252 | orchestrator |
2025-05-28 17:35:47.786277 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-05-28 17:35:47.786288 | orchestrator | Wednesday 28 May 2025 17:28:24 +0000 (0:00:00.377) 0:01:33.570 *********
2025-05-28 17:35:47.786299 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 17:35:47.786310 | orchestrator |
2025-05-28 17:35:47.786321 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-05-28 17:35:47.786331 | orchestrator | Wednesday 28 May 2025 17:28:25 +0000 (0:00:01.073) 0:01:34.643 *********
2025-05-28 17:35:47.786342 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:35:47.786352 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:35:47.786363 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:35:47.786374 | orchestrator |
2025-05-28 17:35:47.786384 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2025-05-28 17:35:47.786395 | orchestrator | Wednesday 28 May 2025 17:28:27 +0000 (0:00:02.010) 0:01:36.654 *********
2025-05-28 17:35:47.786406 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:35:47.786417 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:35:47.786427 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:35:47.786438 | orchestrator |
2025-05-28 17:35:47.786448 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-05-28 17:35:47.786459 | orchestrator | Wednesday 28 May 2025 17:28:29 +0000 (0:00:01.951) 0:01:38.605 *********
2025-05-28 17:35:47.786470 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:35:47.786480 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:35:47.786491 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:35:47.786501 | orchestrator |
2025-05-28 17:35:47.786512 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-05-28 17:35:47.786523 | orchestrator | Wednesday 28 May 2025 17:28:30 +0000 (0:00:00.303) 0:01:38.908 *********
2025-05-28 17:35:47.786533 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-05-28 17:35:47.786544 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:35:47.786555 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-05-28 17:35:47.786565 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:35:47.786576 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-05-28 17:35:47.786587 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-05-28 17:35:47.786598 | orchestrator |
2025-05-28 17:35:47.786610 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-05-28 17:35:47.786630 | orchestrator | Wednesday 28 May 2025 17:28:39 +0000 (0:00:09.442) 0:01:48.351 *********
2025-05-28 17:35:47.786643 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:35:47.786656 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:35:47.786668 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:35:47.786679 | orchestrator |
2025-05-28 17:35:47.786691 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-05-28 17:35:47.786703 | orchestrator | Wednesday 28 May 2025 17:28:40 +0000 (0:00:01.273) 0:01:49.624 *********
2025-05-28 17:35:47.786715 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-05-28 17:35:47.786727 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:35:47.786783 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-05-28 17:35:47.786797 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:35:47.786809 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-05-28 17:35:47.786821 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:35:47.786833 | orchestrator |
2025-05-28 17:35:47.786845 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-05-28 17:35:47.786856 | orchestrator | Wednesday 28 May 2025 17:28:42 +0000 (0:00:02.179) 0:01:51.804 *********
2025-05-28 17:35:47.786868 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:35:47.786880 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:35:47.786892 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:35:47.786903 | orchestrator |
2025-05-28 17:35:47.786915 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2025-05-28 17:35:47.786935 | orchestrator | Wednesday 28 May 2025 17:28:43 +0000 (0:00:01.013) 0:01:52.817 *********
2025-05-28 17:35:47.786945 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:35:47.786956 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:35:47.786966 | orchestrator | changed: [testbed-node-0]
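
The service-rabbitmq tasks above run on the first node only and are delegated to the RabbitMQ host (note the "-> {{ service_rabbitmq_delegate_host }}" marker in the result). A minimal sketch of the user part with the community.rabbitmq module, assuming the credential variable names:

- name: nova | Ensure RabbitMQ users exist
  community.rabbitmq.rabbitmq_user:
    user: "{{ nova_rabbitmq_user }}"
    password: "{{ nova_rabbitmq_password }}"
    vhost: /
    configure_priv: ".*"
    read_priv: ".*"
    write_priv: ".*"
  delegate_to: "{{ service_rabbitmq_delegate_host }}"
  run_once: true
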
2025-05-28 17:35:47.786977 | orchestrator |
2025-05-28 17:35:47.786988 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2025-05-28 17:35:47.786999 | orchestrator | Wednesday 28 May 2025 17:28:45 +0000 (0:00:01.110) 0:01:53.928 *********
2025-05-28 17:35:47.787010 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:35:47.787020 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:35:47.787125 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:35:47.787142 | orchestrator |
2025-05-28 17:35:47.787152 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2025-05-28 17:35:47.787163 | orchestrator | Wednesday 28 May 2025 17:28:47 +0000 (0:00:02.714) 0:01:56.643 *********
2025-05-28 17:35:47.787173 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:35:47.787184 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:35:47.787194 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:35:47.787205 | orchestrator |
2025-05-28 17:35:47.787216 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-05-28 17:35:47.787226 | orchestrator | Wednesday 28 May 2025 17:29:08 +0000 (0:00:20.578) 0:02:17.221 *********
2025-05-28 17:35:47.787237 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:35:47.787247 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:35:47.787257 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:35:47.787268 | orchestrator |
2025-05-28 17:35:47.787278 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-05-28 17:35:47.787289 | orchestrator | Wednesday 28 May 2025 17:29:19 +0000 (0:00:11.029) 0:02:28.251 *********
2025-05-28 17:35:47.787299 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:35:47.787310 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:35:47.787320 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:35:47.787331 | orchestrator |
2025-05-28 17:35:47.787341 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2025-05-28 17:35:47.787352 | orchestrator | Wednesday 28 May 2025 17:29:20 +0000 (0:00:00.827) 0:02:29.078 *********
2025-05-28 17:35:47.787362 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:35:47.787373 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:35:47.787383 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:35:47.787394 | orchestrator |
2025-05-28 17:35:47.787404 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2025-05-28 17:35:47.787415 | orchestrator | Wednesday 28 May 2025 17:29:31 +0000 (0:00:10.862) 0:02:39.940 *********
2025-05-28 17:35:47.787426 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:35:47.787436 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:35:47.787446 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:35:47.787457 | orchestrator |
2025-05-28 17:35:47.787467 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-05-28 17:35:47.787478 | orchestrator | Wednesday 28 May 2025 17:29:32 +0000 (0:00:01.559) 0:02:41.499 *********
2025-05-28 17:35:47.787488 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:35:47.787499 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:35:47.787509 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:35:47.787520 | orchestrator |
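
The cell bootstrap above shells out to nova-manage cell_v2 inside the bootstrap container: list_cells feeds "Extract current cell settings", and "Create cell" only fires when no matching cell exists yet. A minimal sketch with the command module; the container name and the guard condition are assumptions:

- name: Get a list of existing cells
  ansible.builtin.command: docker exec nova_conductor nova-manage cell_v2 list_cells --verbose
  register: existing_cells
  changed_when: false
  run_once: true

- name: Create cell
  ansible.builtin.command: docker exec nova_conductor nova-manage cell_v2 create_cell
  when: existing_cells.stdout is not search('cell1')  # assumed guard
  run_once: true
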
2025-05-28 17:35:47.787530 | orchestrator | PLAY [Apply role nova] *********************************************************
2025-05-28 17:35:47.787540 | orchestrator |
2025-05-28 17:35:47.787551 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-05-28 17:35:47.787561 | orchestrator | Wednesday 28 May 2025 17:29:32 +0000 (0:00:00.318) 0:02:41.818 *********
2025-05-28 17:35:47.787572 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 17:35:47.787584 | orchestrator |
2025-05-28 17:35:47.787594 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2025-05-28 17:35:47.787612 | orchestrator | Wednesday 28 May 2025 17:29:33 +0000 (0:00:00.505) 0:02:42.324 *********
2025-05-28 17:35:47.787623 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2025-05-28 17:35:47.787633 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2025-05-28 17:35:47.787644 | orchestrator |
2025-05-28 17:35:47.787654 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2025-05-28 17:35:47.787665 | orchestrator | Wednesday 28 May 2025 17:29:36 +0000 (0:00:03.235) 0:02:45.559 *********
2025-05-28 17:35:47.787697 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2025-05-28 17:35:47.787717 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2025-05-28 17:35:47.787728 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2025-05-28 17:35:47.787763 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2025-05-28 17:35:47.787777 | orchestrator |
2025-05-28 17:35:47.787787 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2025-05-28 17:35:47.787798 | orchestrator | Wednesday 28 May 2025 17:29:43 +0000 (0:00:06.721) 0:02:52.281 *********
2025-05-28 17:35:47.787809 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-28 17:35:47.787819 | orchestrator |
2025-05-28 17:35:47.787830 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2025-05-28 17:35:47.787840 | orchestrator | Wednesday 28 May 2025 17:29:46 +0000 (0:00:03.246) 0:02:55.527 *********
2025-05-28 17:35:47.787851 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-28 17:35:47.787861 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2025-05-28 17:35:47.787872 | orchestrator |
2025-05-28 17:35:47.787882 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2025-05-28 17:35:47.787893 | orchestrator | Wednesday 28 May 2025 17:29:50 +0000 (0:00:03.935) 0:02:59.462 *********
2025-05-28 17:35:47.787903 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-28 17:35:47.787914 | orchestrator |
2025-05-28 17:35:47.787924 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2025-05-28 17:35:47.787935 | orchestrator | Wednesday 28 May 2025 17:29:53 +0000 (0:00:03.100) 0:03:02.563 *********
2025-05-28 17:35:47.787945 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2025-05-28 17:35:47.787956 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
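
The service-ks-register block above registers Nova in Keystone: the service record, one endpoint per interface (the v2.1 URLs marked changed), the service user, and its role grants. A minimal sketch of the first two steps with openstack.cloud modules, assuming auth comes from clouds.yaml and that the region name is RegionOne:

- name: nova | Creating services
  openstack.cloud.catalog_service:
    name: nova
    service_type: compute
    description: OpenStack Compute
  run_once: true

- name: nova | Creating endpoints
  openstack.cloud.endpoint:
    service: nova
    endpoint_interface: "{{ item.interface }}"
    url: "{{ item.url }}"
    region: RegionOne
  loop:
    - { interface: internal, url: "https://api-int.testbed.osism.xyz:8774/v2.1" }
    - { interface: public, url: "https://api.testbed.osism.xyz:8774/v2.1" }
  run_once: true
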
2025-05-28 17:35:47.787966 | orchestrator |
2025-05-28 17:35:47.787977 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-05-28 17:35:47.788068 | orchestrator | Wednesday 28 May 2025 17:30:01 +0000 (0:00:07.746) 0:03:10.310 *********
2025-05-28 17:35:47.788089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-28 17:35:47.788158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-28 17:35:47.788179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-28 17:35:47.788269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-28 17:35:47.788287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-28 17:35:47.788299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-28 17:35:47.788318 | orchestrator |
2025-05-28 17:35:47.788329 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2025-05-28 17:35:47.788341 | orchestrator | Wednesday 28 May 2025 17:30:02 +0000 (0:00:01.371) 0:03:11.681 *********
2025-05-28 17:35:47.788351 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:35:47.788362 | orchestrator |
2025-05-28 17:35:47.788373 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2025-05-28 17:35:47.788383 | orchestrator | Wednesday 28 May 2025 17:30:02 +0000 (0:00:00.118) 0:03:11.800 *********
2025-05-28 17:35:47.788394 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:35:47.788404 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:35:47.788415 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:35:47.788426 | orchestrator |
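
"Ensuring config directories exist" above loops over the same nova_services map (nova-api, nova-scheduler, ...) and creates one config directory per enabled service on each host. A minimal sketch, assuming kolla-style variable names:

- name: Ensuring config directories exist
  ansible.builtin.file:
    path: "{{ node_config_directory }}/{{ item.key }}"
    state: directory
    owner: "{{ config_owner_user }}"
    group: "{{ config_owner_group }}"
    mode: "0770"
  become: true
  with_dict: "{{ nova_services }}"
  when: item.value.enabled | bool
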
17:35:47.788457 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-28 17:35:47.788468 | orchestrator | 2025-05-28 17:35:47.788478 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-05-28 17:35:47.788489 | orchestrator | Wednesday 28 May 2025 17:30:05 +0000 (0:00:01.043) 0:03:13.838 ********* 2025-05-28 17:35:47.788499 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:47.788510 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:47.788520 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:47.788531 | orchestrator | 2025-05-28 17:35:47.788541 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-28 17:35:47.788552 | orchestrator | Wednesday 28 May 2025 17:30:05 +0000 (0:00:00.331) 0:03:14.169 ********* 2025-05-28 17:35:47.788563 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:35:47.788573 | orchestrator | 2025-05-28 17:35:47.788584 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-05-28 17:35:47.788600 | orchestrator | Wednesday 28 May 2025 17:30:06 +0000 (0:00:00.731) 0:03:14.900 ********* 2025-05-28 17:35:47.788612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-28 17:35:47.788659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-28 17:35:47.788702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-28 17:35:47.788721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.788733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.788806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.788834 | orchestrator | 2025-05-28 17:35:47.788845 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-05-28 
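
The service-cert-copy role that just ran iterates over the same service dicts echoed above and drops any extra CA certificates into each service's config directory. The next two tasks would copy a backend TLS certificate and key the same way, but every haproxy entry in this deployment carries 'tls_backend': 'no', so they skip per item. A sketch of the conditional copy, assuming kolla-ansible conventions (select_services_enabled_and_mapped_to_host is the filter these roles use to pick services; the src path and the simplified when condition are illustrative):

  - name: nova | Copying over backend internal TLS certificate
    ansible.builtin.copy:
      src: "{{ kolla_certificates_dir }}/{{ inventory_hostname }}-cert.pem"  # illustrative path
      dest: "{{ node_config_directory }}/{{ item.key }}/{{ item.key }}-cert.pem"
      mode: "0600"
    become: true
    with_dict: "{{ nova_services | select_services_enabled_and_mapped_to_host }}"
    when: kolla_enable_tls_backend | bool   # simplified; the role derives this per service
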
17:35:47.788856 | orchestrator | Wednesday 28 May 2025 17:30:08 +0000 (0:00:02.635) 0:03:17.536 ********* 2025-05-28 17:35:47.788868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-28 17:35:47.788880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:35:47.788892 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:47.788909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-28 17:35:47.788921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:35:47.788939 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:47.788982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-28 17:35:47.788997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:35:47.789009 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:47.789019 | orchestrator | 2025-05-28 17:35:47.789030 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-28 17:35:47.789041 | orchestrator | Wednesday 28 May 2025 17:30:09 +0000 (0:00:00.827) 0:03:18.364 ********* 2025-05-28 17:35:47.789057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-28 17:35:47.789069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:35:47.789087 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:47.789129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-28 17:35:47.789143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:35:47.789154 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:47.789166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-28 17:35:47.789182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:35:47.789193 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:47.789204 | orchestrator | 2025-05-28 17:35:47.789215 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-05-28 17:35:47.789232 | orchestrator | Wednesday 28 May 2025 17:30:11 +0000 (0:00:01.592) 0:03:19.957 ********* 2025-05-28 17:35:47.789274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-28 17:35:47.789289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-28 17:35:47.789306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-28 17:35:47.789318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.789368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.789382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.789393 | orchestrator | 2025-05-28 17:35:47.789404 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-05-28 17:35:47.789415 | orchestrator | Wednesday 28 May 2025 17:30:13 +0000 (0:00:02.757) 0:03:22.714 ********* 2025-05-28 17:35:47.789426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-28 17:35:47.789443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-28 17:35:47.789492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-28 17:35:47.789506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.789518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.789529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.789541 | orchestrator | 2025-05-28 17:35:47.789551 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-05-28 17:35:47.789562 | orchestrator | Wednesday 28 May 2025 17:30:22 +0000 (0:00:08.916) 0:03:31.631 ********* 2025-05-28 17:35:47.789578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-28 17:35:47.789625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:35:47.789638 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:47.789650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-28 17:35:47.789662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:35:47.789673 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:47.789689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-28 17:35:47.789707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 17:35:47.789721 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:47.789800 | orchestrator | 2025-05-28 17:35:47.789825 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-05-28 17:35:47.789845 | orchestrator | Wednesday 28 May 2025 17:30:23 +0000 (0:00:00.515) 0:03:32.146 ********* 2025-05-28 17:35:47.789864 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:35:47.789882 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:35:47.789897 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:35:47.789908 | orchestrator | 2025-05-28 17:35:47.789960 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-05-28 17:35:47.789974 | orchestrator | Wednesday 28 May 2025 17:30:25 +0000 (0:00:02.552) 0:03:34.698 ********* 2025-05-28 17:35:47.789985 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:47.789996 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:47.790006 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:47.790047 | orchestrator | 2025-05-28 17:35:47.790060 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-05-28 17:35:47.790071 | orchestrator | Wednesday 28 May 2025 17:30:26 +0000 (0:00:00.308) 0:03:35.007 ********* 2025-05-28 17:35:47.790084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-28 17:35:47.790108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-28 17:35:47.790163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-28 17:35:47.790176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': 
True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.790187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.790197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.790207 | orchestrator | 2025-05-28 17:35:47.790217 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-28 17:35:47.790227 | orchestrator | Wednesday 28 May 2025 17:30:28 +0000 (0:00:02.267) 0:03:37.274 ********* 2025-05-28 17:35:47.790243 | orchestrator | 2025-05-28 17:35:47.790253 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-28 17:35:47.790263 | orchestrator | Wednesday 28 May 2025 17:30:28 +0000 (0:00:00.243) 0:03:37.518 ********* 2025-05-28 17:35:47.790273 | orchestrator | 2025-05-28 17:35:47.790282 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-28 17:35:47.790291 | orchestrator | Wednesday 28 May 2025 17:30:28 +0000 (0:00:00.188) 0:03:37.707 ********* 2025-05-28 17:35:47.790301 | orchestrator | 2025-05-28 17:35:47.790310 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-05-28 17:35:47.790320 | orchestrator | Wednesday 28 May 2025 17:30:29 +0000 (0:00:00.372) 0:03:38.081 ********* 2025-05-28 17:35:47.790329 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:35:47.790339 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:35:47.790348 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:35:47.790358 | orchestrator | 2025-05-28 17:35:47.790367 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-05-28 17:35:47.790377 | orchestrator | Wednesday 28 May 2025 17:30:48 +0000 (0:00:19.674) 0:03:57.755 ********* 2025-05-28 17:35:47.790386 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:35:47.790401 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:35:47.790410 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:35:47.790420 | orchestrator | 2025-05-28 17:35:47.790429 | orchestrator | PLAY [Apply role nova-cell] 
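
The three Flush handlers steps force Ansible's queued handlers to run at this point instead of at the end of the play, so nova_scheduler and nova_api are recreated with the new configuration before the nova-cell play starts (the 0:00:19.674 and 0:00:06.368 durations on the following timing lines are those container restarts). kolla-ansible restarts containers through its own module; a sketch of such a handler, assuming the upstream kolla_docker module and its recreate_or_restart_container action (newer releases call the module kolla_container; the remaining fields are taken from the nova-api service dict above):

  - name: Restart nova-api container
    become: true
    kolla_docker:
      action: "recreate_or_restart_container"
      common_options: "{{ docker_common_options }}"
      name: "nova_api"
      image: "registry.osism.tech/kolla/nova-api:2024.2"
      volumes: "{{ nova_services['nova-api'].volumes }}"
      privileged: true
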
**************************************************** 2025-05-28 17:35:47.790438 | orchestrator | 2025-05-28 17:35:47.790448 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-28 17:35:47.790457 | orchestrator | Wednesday 28 May 2025 17:30:55 +0000 (0:00:06.368) 0:04:04.123 ********* 2025-05-28 17:35:47.790467 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:35:47.790477 | orchestrator | 2025-05-28 17:35:47.790486 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-28 17:35:47.790496 | orchestrator | Wednesday 28 May 2025 17:30:56 +0000 (0:00:01.229) 0:04:05.353 ********* 2025-05-28 17:35:47.790505 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:35:47.790515 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:35:47.790524 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:35:47.790534 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:47.790543 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:47.790553 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:47.790562 | orchestrator | 2025-05-28 17:35:47.790572 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-05-28 17:35:47.790581 | orchestrator | Wednesday 28 May 2025 17:30:57 +0000 (0:00:01.208) 0:04:06.561 ********* 2025-05-28 17:35:47.790591 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:47.790600 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:47.790609 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:47.790619 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:35:47.790628 | orchestrator | 2025-05-28 17:35:47.790638 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-28 17:35:47.790674 | orchestrator | Wednesday 28 May 2025 17:30:58 +0000 (0:00:01.080) 0:04:07.642 ********* 2025-05-28 17:35:47.790686 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-05-28 17:35:47.790696 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-05-28 17:35:47.790705 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-05-28 17:35:47.790715 | orchestrator | 2025-05-28 17:35:47.790724 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-28 17:35:47.790734 | orchestrator | Wednesday 28 May 2025 17:30:59 +0000 (0:00:00.737) 0:04:08.379 ********* 2025-05-28 17:35:47.790796 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-05-28 17:35:47.790815 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-05-28 17:35:47.790825 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-05-28 17:35:47.790835 | orchestrator | 2025-05-28 17:35:47.790843 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-28 17:35:47.790851 | orchestrator | Wednesday 28 May 2025 17:31:00 +0000 (0:00:01.339) 0:04:09.719 ********* 2025-05-28 17:35:47.790859 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-05-28 17:35:47.790867 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:35:47.790874 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-05-28 17:35:47.790882 | 
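
The module-load include is a small helper role: it modprobes each requested module on the compute nodes and writes a modules-load.d drop-in so the module comes back after a reboot; the Drop module persistence branch only fires when a module is being removed, hence the skips. A minimal equivalent using stock Ansible modules (the drop-in path is the standard systemd location; the file name is illustrative):

  - name: Load modules
    community.general.modprobe:
      name: "{{ item }}"
      state: present
    loop:
      - br_netfilter

  - name: Persist modules via modules-load.d
    ansible.builtin.copy:
      content: "br_netfilter\n"
      dest: /etc/modules-load.d/br_netfilter.conf
      mode: "0644"
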
orchestrator | skipping: [testbed-node-4] 2025-05-28 17:35:47.790890 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-05-28 17:35:47.790898 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:35:47.790905 | orchestrator | 2025-05-28 17:35:47.790913 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-05-28 17:35:47.790921 | orchestrator | Wednesday 28 May 2025 17:31:02 +0000 (0:00:01.102) 0:04:10.821 ********* 2025-05-28 17:35:47.790929 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-28 17:35:47.790937 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-28 17:35:47.790945 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:47.790953 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-28 17:35:47.790961 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-28 17:35:47.790968 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:47.790976 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-28 17:35:47.790984 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-28 17:35:47.790992 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:47.791000 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-28 17:35:47.791008 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-28 17:35:47.791016 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-28 17:35:47.791023 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-28 17:35:47.791031 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-28 17:35:47.791039 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-28 17:35:47.791047 | orchestrator | 2025-05-28 17:35:47.791055 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-05-28 17:35:47.791062 | orchestrator | Wednesday 28 May 2025 17:31:03 +0000 (0:00:01.141) 0:04:11.962 ********* 2025-05-28 17:35:47.791070 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:47.791078 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:47.791086 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:47.791094 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:35:47.791102 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:35:47.791109 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:35:47.791117 | orchestrator | 2025-05-28 17:35:47.791129 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-05-28 17:35:47.791137 | orchestrator | Wednesday 28 May 2025 17:31:04 +0000 (0:00:01.355) 0:04:13.317 ********* 2025-05-28 17:35:47.791145 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:47.791153 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:47.791161 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:47.791168 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:35:47.791176 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:35:47.791184 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:35:47.791192 | 
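
With br_netfilter loaded, the role enables the bridge-nf-call sysctls on the compute nodes so iptables-based security-group rules see bridged traffic, and masks the distribution's qemu-kvm service so nothing on the host competes with the nova_libvirt container; the controllers (testbed-node-0/1/2) skip all three tasks. A sketch with stock modules, matching the item names in the log:

  - name: Enable bridge-nf-call sysctl variables
    ansible.posix.sysctl:
      name: "{{ item }}"
      value: "1"
      state: present
    loop:
      - net.bridge.bridge-nf-call-iptables
      - net.bridge.bridge-nf-call-ip6tables

  - name: Mask qemu-kvm service
    ansible.builtin.systemd:
      name: qemu-kvm
      masked: true
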
orchestrator | 2025-05-28 17:35:47.791199 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-05-28 17:35:47.791212 | orchestrator | Wednesday 28 May 2025 17:31:06 +0000 (0:00:02.221) 0:04:15.538 ********* 2025-05-28 17:35:47.791221 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-28 17:35:47.791257 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-28 17:35:47.791268 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-28 17:35:47.791277 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version 
--daemon'], 'timeout': '30'}}}) 2025-05-28 17:35:47.791290 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.791306 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-28 17:35:47.791338 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-28 17:35:47.791348 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.791357 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.791365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-28 17:35:47.791378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-28 17:35:47.791392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-28 17:35:47.791422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.791432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.791440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.791449 | orchestrator |
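[Editor's note: the `(item={'key': ..., 'value': ...})` dicts echoed above are kolla-ansible service definitions, one per container in this Nova cell — nova-libvirt, nova-ssh and nova-compute on the compute nodes, nova-novncproxy and nova-conductor on the controllers — each carrying the image tag, bind mounts, and a Docker healthcheck. The "Ensuring config directories exist" task loops over the enabled entries and creates one directory per service under /etc/kolla on the target host. A minimal sketch of that loop, assuming a `services` dict shaped like the logged items (an illustration, not the actual role code):

```python
from pathlib import Path

# Hypothetical service map, shaped like the item dicts echoed in the log.
services = {
    "nova-conductor": {
        "container_name": "nova_conductor",
        "enabled": True,
        "image": "registry.osism.tech/kolla/nova-conductor:2024.2",
        "volumes": ["/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro"],
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "test": ["CMD-SHELL", "healthcheck_port nova-conductor 5672"],
        },
    },
}

# One config directory per enabled service; the first volume of every
# service bind-mounts this directory read-only into its container.
for name, svc in services.items():
    if svc.get("enabled"):
        Path("/etc/kolla", name).mkdir(mode=0o770, parents=True, exist_ok=True)
```

The healthcheck tests seen in the items map onto Docker HEALTHCHECK semantics: `healthcheck_port nova-conductor 5672` checks that the named process has a connection on the RabbitMQ port, `healthcheck_curl ...:6080/vnc_lite.html` probes the noVNC endpoint, and `virsh version --daemon` checks that libvirtd answers.]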
2025-05-28 17:35:47.791457 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-28 17:35:47.791465 | orchestrator | Wednesday 28 May 2025 17:31:11 +0000 (0:00:04.311) 0:04:19.850 ********* 2025-05-28 17:35:47.791473 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:35:47.791482 | orchestrator | 2025-05-28 17:35:47.791489 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-05-28 17:35:47.791497 | orchestrator | Wednesday 28 May 2025 17:31:12 +0000 (0:00:01.733) 0:04:21.583 ********* 2025-05-28 17:35:47.791505 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-28 17:35:47.791522 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-28 17:35:47.791552 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt',
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-28 17:35:47.791561 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-28 17:35:47.791570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-28 17:35:47.791578 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-28 17:35:47.791586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-28 17:35:47.791615 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-28 17:35:47.791623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 
'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-28 17:35:47.791654 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.791664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.791672 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.791680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.791700 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.791708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.791716 | orchestrator | 2025-05-28 17:35:47.791724 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-05-28 17:35:47.791732 | orchestrator | Wednesday 28 May 2025 17:31:17 +0000 (0:00:04.583) 0:04:26.167 ********* 2025-05-28 17:35:47.791784 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 17:35:47.791796 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 17:35:47.791804 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 17:35:47.791818 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:35:47.791831 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 17:35:47.791840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 17:35:47.791870 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 17:35:47.791880 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:35:47.791888 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': 
{'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 17:35:47.791897 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 17:35:47.791912 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 17:35:47.791920 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:35:47.791932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-28 17:35:47.791940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 17:35:47.791949 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:47.791980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-28 17:35:47.791990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 17:35:47.791998 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:47.792006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-28 17:35:47.792020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 17:35:47.792028 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:47.792036 | orchestrator | 2025-05-28 17:35:47.792044 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-28 17:35:47.792052 | orchestrator | Wednesday 28 May 2025 17:31:20 +0000 (0:00:02.802) 0:04:28.969 ********* 2025-05-28 17:35:47.792063 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 17:35:47.792072 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 17:35:47.792104 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 17:35:47.792113 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:35:47.792122 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 17:35:47.792136 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 17:35:47.792144 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 17:35:47.792156 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:35:47.792164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-28 17:35:47.792194 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 17:35:47.792203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 17:35:47.792211 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:47.792220 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 17:35:47.792234 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 17:35:47.792242 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:35:47.792254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-28 17:35:47.792262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 17:35:47.792270 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:47.792278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-28 17:35:47.792309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 17:35:47.792318 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:47.792326 | orchestrator | 2025-05-28 17:35:47.792334 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-28 17:35:47.792347 | orchestrator | Wednesday 28 May 2025 17:31:23 +0000 (0:00:03.271) 0:04:32.240 ********* 2025-05-28 17:35:47.792355 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:47.792363 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:47.792371 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:47.792378 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 17:35:47.792386 | orchestrator | 2025-05-28 17:35:47.792394 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-05-28 17:35:47.792402 | orchestrator | Wednesday 28 May 2025 17:31:24 +0000 (0:00:01.093) 0:04:33.334 ********* 2025-05-28 17:35:47.792410 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-28 17:35:47.792418 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-28 17:35:47.792426 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-28 17:35:47.792433 | orchestrator | 2025-05-28 17:35:47.792441 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-05-28 17:35:47.792449 | orchestrator | Wednesday 28 May 2025 17:31:26 +0000 (0:00:02.153) 0:04:35.487 ********* 2025-05-28 17:35:47.792457 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-28 17:35:47.792465 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-28 17:35:47.792472 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-28 17:35:47.792480 | orchestrator | 2025-05-28 17:35:47.792488 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-05-28 17:35:47.792496 | orchestrator | Wednesday 28 May 2025 17:31:28 +0000 (0:00:02.042) 0:04:37.530 ********* 2025-05-28 17:35:47.792503 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:35:47.792511 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:35:47.792519 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:35:47.792527 | orchestrator | 2025-05-28 17:35:47.792535 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-05-28 17:35:47.792543 | orchestrator | Wednesday 28 May 2025 17:31:29 +0000 (0:00:00.970) 0:04:38.501 ********* 2025-05-28 17:35:47.792550 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:35:47.792558 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:35:47.792566 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:35:47.792574 | orchestrator | 2025-05-28 17:35:47.792582 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-05-28 17:35:47.792589 | orchestrator | Wednesday 28 May 2025 17:31:30 +0000 (0:00:00.498) 0:04:38.999 ********* 2025-05-28 17:35:47.792597 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-28 17:35:47.792605 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-28 17:35:47.792613 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-28 17:35:47.792621 | orchestrator | 2025-05-28 17:35:47.792628 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-05-28 17:35:47.792636 | orchestrator | Wednesday 28 May 2025 17:31:31 +0000 (0:00:01.575) 0:04:40.574 ********* 2025-05-28 17:35:47.792644 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-28 17:35:47.792652 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-28 17:35:47.792659 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-28 17:35:47.792667 | orchestrator |
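[Editor's note: external_ceph.yml, included just above for the three compute nodes, wires nova-compute to the testbed's external Ceph cluster: it stat-checks the nova and cinder client keyrings on the deployment host, extracts the base64 key from each, and copies the keyring files into the nova-compute config directories. Ceph keyrings are small INI-style files, so the two "Extract ... key from file" steps amount to roughly the following — a sketch under that assumption with a hypothetical filename, not the role's actual task:

```python
import re

def extract_key(keyring_path: str, client: str) -> str:
    """Pull the base64 'key = ...' value for the given [client.X]
    section out of an INI-style Ceph keyring file."""
    section = None
    with open(keyring_path) as handle:
        for raw in handle:
            line = raw.strip()
            if line.startswith("[") and line.endswith("]"):
                section = line[1:-1]
            elif section == client and re.match(r"key\s*=", line):
                return line.split("=", 1)[1].strip()
    raise KeyError(f"no key for {client} in {keyring_path}")

# Hypothetical path; the testbed keeps the keyrings on the deployment host.
nova_key = extract_key("ceph.client.nova.keyring", "client.nova")
```

The extracted keys are what the libvirt secret tasks further down feed into the nova_libvirt container.]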
2025-05-28 17:35:47.792675 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-05-28 17:35:47.792683 | orchestrator | Wednesday 28 May 2025 17:31:32 +0000 (0:00:01.243) 0:04:41.818 ********* 2025-05-28 17:35:47.792691 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-28 17:35:47.792698 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-28 17:35:47.792706 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-28 17:35:47.792714 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-05-28 17:35:47.792721 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-05-28 17:35:47.792734 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-05-28 17:35:47.792786 | orchestrator | 2025-05-28 17:35:47.792794 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-05-28 17:35:47.792803 | orchestrator | Wednesday 28 May 2025 17:31:37 +0000 (0:00:04.030) 0:04:45.848 ********* 2025-05-28 17:35:47.792810 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:35:47.792818 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:35:47.792826 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:35:47.792834 | orchestrator | 2025-05-28 17:35:47.792841 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-05-28 17:35:47.792849 | orchestrator | Wednesday 28 May 2025 17:31:37 +0000 (0:00:00.263) 0:04:46.111 ********* 2025-05-28 17:35:47.792857 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:35:47.792864 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:35:47.792872 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:35:47.792880 | orchestrator | 2025-05-28 17:35:47.792888 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-05-28 17:35:47.792896 | orchestrator | Wednesday 28 May 2025 17:31:37 +0000 (0:00:00.244) 0:04:46.356 ********* 2025-05-28 17:35:47.792903 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:35:47.792911 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:35:47.792919 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:35:47.792927 | orchestrator | 2025-05-28 17:35:47.792959 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-05-28 17:35:47.792969 | orchestrator | Wednesday 28 May 2025 17:31:38 +0000 (0:00:01.434) 0:04:47.790 ********* 2025-05-28 17:35:47.792977 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-28 17:35:47.792986 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-28 17:35:47.792994 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-28 17:35:47.793002 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-28 17:35:47.793010 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-28 17:35:47.793018 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-28 17:35:47.793026 | orchestrator |
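[Editor's note: the two UUIDs just logged are the libvirt secrets nova_libvirt needs for RBD access: 5a2bf0bf-… for client.nova (the ephemeral-disk pool) and 63dd366f-… for client.cinder (attached volumes). The role first registers a secret definition and then, in the next task, attaches the actual Ceph key to it. Done by hand against libvirt, the same two steps would look roughly like this — a hedged sketch reusing the client.nova UUID from the log, with a placeholder key value:

```python
import subprocess
import tempfile

# libvirt secret definition for a Ceph usage type; UUID taken from the log.
SECRET_XML = """<secret ephemeral='no' private='no'>
  <uuid>5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd</uuid>
  <usage type='ceph'>
    <name>client.nova secret</name>
  </usage>
</secret>
"""

NOVA_KEY = "AQ..."  # placeholder: the base64 key extracted from the keyring

with tempfile.NamedTemporaryFile("w", suffix=".xml") as xml:
    xml.write(SECRET_XML)
    xml.flush()
    # Step 1: register the secret ("Pushing nova secret xml for libvirt").
    subprocess.run(["virsh", "secret-define", xml.name], check=True)

# Step 2: attach the key ("Pushing secrets key for libvirt").
subprocess.run(
    ["virsh", "secret-set-value",
     "--secret", "5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd",
     "--base64", NOVA_KEY],
    check=True,
)
```

In the deployment both steps run inside the nova_libvirt container on each compute node, which is why the secrets directory was created first.]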
2025-05-28 17:35:47.793034 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-05-28 17:35:47.793042 | orchestrator | Wednesday 28 May 2025 17:31:42 +0000 (0:00:03.214) 0:04:51.004 ********* 2025-05-28 17:35:47.793050 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-28 17:35:47.793057 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-28 17:35:47.793065 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-28 17:35:47.793073 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-28 17:35:47.793081 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:35:47.793089 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-28 17:35:47.793097 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:35:47.793104 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-28 17:35:47.793112 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:35:47.793120 | orchestrator | 2025-05-28 17:35:47.793128 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-05-28 17:35:47.793136 | orchestrator | Wednesday 28 May 2025 17:31:45 +0000 (0:00:03.493) 0:04:54.498 ********* 2025-05-28 17:35:47.793143 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:35:47.793151 | orchestrator | 2025-05-28 17:35:47.793159 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-05-28 17:35:47.793204 | orchestrator | Wednesday 28 May 2025 17:31:45 +0000 (0:00:00.129) 0:04:54.627 ********* 2025-05-28 17:35:47.793213 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:35:47.793221 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:35:47.793228 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:35:47.793236 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:47.793244 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:47.793252 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:47.793259 | orchestrator | 2025-05-28 17:35:47.793267 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-05-28 17:35:47.793275 | orchestrator | Wednesday 28 May 2025 17:31:46 +0000 (0:00:00.766) 0:04:55.394 ********* 2025-05-28 17:35:47.793283 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-28 17:35:47.793290 | orchestrator | 2025-05-28 17:35:47.793298 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-05-28 17:35:47.793306 | orchestrator | Wednesday 28 May 2025 17:31:47 +0000 (0:00:00.675) 0:04:56.069 ********* 2025-05-28 17:35:47.793313 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:35:47.793321 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:35:47.793329 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:35:47.793336 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:47.793347 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:47.793355 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:47.793363 | orchestrator | 2025-05-28 17:35:47.793371 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-05-28 17:35:47.793378 | orchestrator | Wednesday 28 May 2025 17:31:47 +0000 (0:00:00.555) 0:04:56.625 ********* 2025-05-28 17:35:47.793387 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host',
'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-28 17:35:47.793420 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-28 17:35:47.793430 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-28 17:35:47.793444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-28 17:35:47.793452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-28 17:35:47.793464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-28 17:35:47.793472 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-28 17:35:47.793485 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-28 17:35:47.793493 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-28 17:35:47.793502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.793514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.793523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.793535 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.793550 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.793559 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.793571 | orchestrator |
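[Editor's note: every volume list above begins with an /etc/kolla/<service>/:/var/lib/kolla/config_files/:ro bind mount; the config.json written by this task is what the kolla container entrypoint reads from that mount at startup to copy the rendered configuration into place and pick the service command. Roughly, for nova_compute — illustrative values, not the file the role actually rendered for the testbed:

```python
import json

# Illustrative shape of /etc/kolla/nova-compute/config.json; the real file
# is rendered by the nova-cell role onto each compute node.
config = {
    "command": "nova-compute",
    "config_files": [
        {
            "source": "/var/lib/kolla/config_files/nova.conf",
            "dest": "/etc/nova/nova.conf",
            "owner": "nova",
            "perm": "0600",
        },
        {
            # Hypothetical entry: ship the Ceph config and keyrings copied
            # in the external_ceph.yml steps above into the container.
            "source": "/var/lib/kolla/config_files/ceph.*",
            "dest": "/etc/ceph/",
            "owner": "nova",
            "perm": "0600",
        },
    ],
}
print(json.dumps(config, indent=4))
```

The nova.conf copied in the next task is the main payload listed under config_files.]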
************************************** 2025-05-28 17:35:47.793587 | orchestrator | Wednesday 28 May 2025 17:31:51 +0000 (0:00:03.858) 0:05:00.483 ********* 2025-05-28 17:35:47.793595 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 17:35:47.793603 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 17:35:47.793616 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 17:35:47.793624 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 17:35:47.793637 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 17:35:47.793655 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 17:35:47.793663 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.793674 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.793683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-28 17:35:47.793696 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.793704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-28 17:35:47.793717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-28 17:35:47.793725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.793733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.793864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.793889 | orchestrator | 2025-05-28 17:35:47.793898 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-05-28 17:35:47.793906 | orchestrator | Wednesday 28 May 2025 17:31:58 +0000 (0:00:06.726) 0:05:07.210 ********* 2025-05-28 17:35:47.793914 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:35:47.793922 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:35:47.793935 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:35:47.793948 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:47.793960 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:47.793973 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:47.793986 | orchestrator | 2025-05-28 17:35:47.793994 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-05-28 17:35:47.794002 | orchestrator | Wednesday 28 May 2025 17:32:00 +0000 (0:00:01.738) 0:05:08.949 ********* 2025-05-28 17:35:47.794010 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-28 17:35:47.794060 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-28 17:35:47.794068 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-28 17:35:47.794083 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-28 17:35:47.794099 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-28 17:35:47.794106 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-28 17:35:47.794113 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-28 17:35:47.794120 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:47.794126 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-28 17:35:47.794133 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:47.794140 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-28 17:35:47.794146 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:47.794153 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-28 17:35:47.794160 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-28 17:35:47.794167 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-28 17:35:47.794173 | orchestrator | 2025-05-28 17:35:47.794180 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-05-28 17:35:47.794187 | orchestrator | Wednesday 28 May 2025 17:32:03 +0000 (0:00:03.661) 0:05:12.611 ********* 2025-05-28 17:35:47.794193 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:35:47.794200 | orchestrator | skipping: [testbed-node-4] 2025-05-28 
17:35:47.794207 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:35:47.794213 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:47.794220 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:47.794226 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:47.794233 | orchestrator | 2025-05-28 17:35:47.794240 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-05-28 17:35:47.794246 | orchestrator | Wednesday 28 May 2025 17:32:04 +0000 (0:00:00.946) 0:05:13.558 ********* 2025-05-28 17:35:47.794253 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-28 17:35:47.794260 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-28 17:35:47.794266 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-28 17:35:47.794273 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-28 17:35:47.794279 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-28 17:35:47.794286 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-28 17:35:47.794292 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-28 17:35:47.794299 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-28 17:35:47.794305 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-28 17:35:47.794312 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-28 17:35:47.794319 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:47.794329 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-28 17:35:47.794336 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:47.794347 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-28 17:35:47.794353 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:47.794360 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-28 17:35:47.794367 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-28 17:35:47.794373 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-28 17:35:47.794380 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-28 17:35:47.794386 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-28 17:35:47.794393 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-28 17:35:47.794399 | 
orchestrator | 2025-05-28 17:35:47.794406 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-05-28 17:35:47.794413 | orchestrator | Wednesday 28 May 2025 17:32:10 +0000 (0:00:05.829) 0:05:19.387 ********* 2025-05-28 17:35:47.794419 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-28 17:35:47.794426 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-28 17:35:47.794436 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-28 17:35:47.794443 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-28 17:35:47.794450 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-28 17:35:47.794456 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-28 17:35:47.794463 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-28 17:35:47.794470 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-28 17:35:47.794476 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-28 17:35:47.794483 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-28 17:35:47.794489 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-28 17:35:47.794496 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-28 17:35:47.794503 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-28 17:35:47.794509 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-28 17:35:47.794516 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:47.794523 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-28 17:35:47.794529 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-28 17:35:47.794536 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:47.794543 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-28 17:35:47.794549 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:47.794556 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-28 17:35:47.794563 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-28 17:35:47.794569 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-28 17:35:47.794576 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-28 17:35:47.794589 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-28 17:35:47.794596 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-28 17:35:47.794603 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-28 17:35:47.794609 | orchestrator | 2025-05-28 17:35:47.794616 | 
orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-05-28 17:35:47.794622 | orchestrator | Wednesday 28 May 2025 17:32:17 +0000 (0:00:07.367) 0:05:26.755 ********* 2025-05-28 17:35:47.794629 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:35:47.794636 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:35:47.794642 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:35:47.794649 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:47.794656 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:47.794662 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:47.794669 | orchestrator | 2025-05-28 17:35:47.794675 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-05-28 17:35:47.794682 | orchestrator | Wednesday 28 May 2025 17:32:18 +0000 (0:00:00.508) 0:05:27.263 ********* 2025-05-28 17:35:47.794689 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:35:47.794698 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:35:47.794705 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:35:47.794712 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:47.794718 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:47.794725 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:47.794732 | orchestrator | 2025-05-28 17:35:47.794757 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-05-28 17:35:47.794765 | orchestrator | Wednesday 28 May 2025 17:32:19 +0000 (0:00:00.689) 0:05:27.953 ********* 2025-05-28 17:35:47.794772 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:47.794779 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:47.794785 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:47.794792 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:35:47.794798 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:35:47.794805 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:35:47.794811 | orchestrator | 2025-05-28 17:35:47.794818 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-05-28 17:35:47.794825 | orchestrator | Wednesday 28 May 2025 17:32:21 +0000 (0:00:01.906) 0:05:29.859 ********* 2025-05-28 17:35:47.794837 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 17:35:47.794845 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 17:35:47.794856 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 17:35:47.794863 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:35:47.794871 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 17:35:47.794882 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 17:35:47.794889 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  
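The container definitions repeated through these loop items all share one kolla healthcheck shape: string-valued interval, retries, start_period and timeout, plus a CMD-SHELL test calling one of the healthcheck_* helper scripts shipped in the kolla images (healthcheck_curl probes a URL, healthcheck_listen a listening socket, healthcheck_port a connection to the given port). As a minimal sketch, assuming the values are seconds and the lowercase docker SDK healthcheck keys, this is how such a dict would map onto Docker's native healthcheck options:

def to_docker_healthcheck(hc):
    """Map a kolla-ansible healthcheck dict onto docker SDK kwargs (sketch)."""
    ns = 1_000_000_000  # the docker SDK takes durations in nanoseconds
    return {
        "test": hc["test"],  # e.g. ['CMD-SHELL', 'healthcheck_listen sshd 8022']
        "interval": int(hc["interval"]) * ns,
        "timeout": int(hc["timeout"]) * ns,
        "retries": int(hc["retries"]),
        "start_period": int(hc["start_period"]) * ns,
    }

assert to_docker_healthcheck({
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_listen sshd 8022"], "timeout": "30",
})["retries"] == 3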
2025-05-28 17:35:47.794896 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:35:47.794908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-28 17:35:47.794915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 17:35:47.794926 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:47.794933 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 17:35:47.794940 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 17:35:47.794951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-28 
17:35:47.794958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 17:35:47.794970 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 17:35:47.794982 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:47.794988 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:35:47.794995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-28 17:35:47.795002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 17:35:47.795009 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:47.795016 | orchestrator | 2025-05-28 17:35:47.795023 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-05-28 17:35:47.795029 | orchestrator | Wednesday 28 May 2025 17:32:23 +0000 (0:00:01.998) 0:05:31.858 ********* 2025-05-28 17:35:47.795036 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-05-28 17:35:47.795043 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-05-28 17:35:47.795049 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:35:47.795056 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  
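Every {'key': ..., 'value': ...} item in the policy-file loop above is what Ansible's dict2items filter yields when kolla-ansible iterates its per-service map; a host is skipped when it is not in the service's group or, as here, when there is no custom policy file to copy. A small sketch of that iteration shape (the service map is abbreviated, not the full definitions from this log):

services = {
    "nova-compute": {"group": "compute", "enabled": True},
    "nova-conductor": {"group": "nova-conductor", "enabled": True},
}
# dict2items yields exactly the key/value item shape seen in the log lines:
for item in ({"key": k, "value": v} for k, v in services.items()):
    print(item["key"], "->", item["value"]["group"])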
2025-05-28 17:35:47.795063 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-05-28 17:35:47.795069 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:35:47.795076 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-05-28 17:35:47.795082 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-05-28 17:35:47.795089 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:35:47.795095 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-05-28 17:35:47.795102 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-05-28 17:35:47.795108 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:47.795115 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-05-28 17:35:47.795121 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-05-28 17:35:47.795128 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:47.795138 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-05-28 17:35:47.795145 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-05-28 17:35:47.795151 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:47.795158 | orchestrator | 2025-05-28 17:35:47.795164 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-05-28 17:35:47.795171 | orchestrator | Wednesday 28 May 2025 17:32:23 +0000 (0:00:00.596) 0:05:32.455 ********* 2025-05-28 17:35:47.795178 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-28 17:35:47.795194 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-28 17:35:47.795201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-28 17:35:47.795208 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-28 17:35:47.795215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-28 17:35:47.795226 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-28 17:35:47.795233 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-28 17:35:47.795248 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.795255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.795262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-28 17:35:47.795269 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-28 17:35:47.795279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.795286 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.795301 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.795308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 17:35:47.795315 | orchestrator | 2025-05-28 17:35:47.795322 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-28 17:35:47.795329 | orchestrator | Wednesday 28 May 2025 17:32:27 +0000 (0:00:03.782) 0:05:36.237 ********* 2025-05-28 17:35:47.795335 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:35:47.795342 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:35:47.795349 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:35:47.795355 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:47.795362 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:47.795368 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:47.795375 | orchestrator | 2025-05-28 17:35:47.795381 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-28 17:35:47.795388 | orchestrator | Wednesday 28 May 2025 17:32:28 +0000 (0:00:00.832) 0:05:37.069 ********* 2025-05-28 17:35:47.795395 | orchestrator | 2025-05-28 17:35:47.795401 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-28 17:35:47.795408 | orchestrator | Wednesday 28 May 2025 17:32:28 +0000 (0:00:00.356) 0:05:37.425 ********* 2025-05-28 17:35:47.795414 | orchestrator | 2025-05-28 17:35:47.795421 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-28 17:35:47.795427 | orchestrator | Wednesday 28 May 2025 17:32:28 +0000 (0:00:00.125) 0:05:37.551 ********* 2025-05-28 17:35:47.795434 | orchestrator | 2025-05-28 17:35:47.795441 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-28 17:35:47.795447 | orchestrator | Wednesday 28 May 2025 17:32:28 +0000 (0:00:00.133) 0:05:37.684 ********* 2025-05-28 
17:35:47.795454 | orchestrator | 2025-05-28 17:35:47.795460 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-28 17:35:47.795467 | orchestrator | Wednesday 28 May 2025 17:32:28 +0000 (0:00:00.123) 0:05:37.808 ********* 2025-05-28 17:35:47.795473 | orchestrator | 2025-05-28 17:35:47.795480 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-28 17:35:47.795486 | orchestrator | Wednesday 28 May 2025 17:32:29 +0000 (0:00:00.124) 0:05:37.932 ********* 2025-05-28 17:35:47.795493 | orchestrator | 2025-05-28 17:35:47.795499 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-05-28 17:35:47.795510 | orchestrator | Wednesday 28 May 2025 17:32:29 +0000 (0:00:00.245) 0:05:38.178 ********* 2025-05-28 17:35:47.795516 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:35:47.795523 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:35:47.795529 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:35:47.795536 | orchestrator | 2025-05-28 17:35:47.795543 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-05-28 17:35:47.795553 | orchestrator | Wednesday 28 May 2025 17:32:41 +0000 (0:00:12.501) 0:05:50.680 ********* 2025-05-28 17:35:47.795559 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:35:47.795566 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:35:47.795573 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:35:47.795579 | orchestrator | 2025-05-28 17:35:47.795586 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-05-28 17:35:47.795592 | orchestrator | Wednesday 28 May 2025 17:32:59 +0000 (0:00:17.661) 0:06:08.341 ********* 2025-05-28 17:35:47.795599 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:35:47.795605 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:35:47.795612 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:35:47.795618 | orchestrator | 2025-05-28 17:35:47.795625 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-05-28 17:35:47.795631 | orchestrator | Wednesday 28 May 2025 17:33:24 +0000 (0:00:24.684) 0:06:33.025 ********* 2025-05-28 17:35:47.795638 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:35:47.795645 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:35:47.795651 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:35:47.795657 | orchestrator | 2025-05-28 17:35:47.795664 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-05-28 17:35:47.795671 | orchestrator | Wednesday 28 May 2025 17:34:04 +0000 (0:00:40.073) 0:07:13.099 ********* 2025-05-28 17:35:47.795677 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2025-05-28 17:35:47.795684 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:35:47.795690 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 
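The FAILED - RETRYING lines above are Ansible's retries/until loop: after the nova-libvirt containers restart, the readiness handler polls until its check passes, and the "(10 retries left)" message after the first failure suggests one initial attempt plus the configured number of retries. A sketch of those semantics, assuming a 5-second delay and leaving the actual probe command abstract (it is not shown in this log):

import subprocess
import time

def wait_until_ready(cmd, retries=10, delay=5):
    # One initial attempt, then up to `retries` retries with a pause between.
    for attempt in range(retries + 1):
        if subprocess.run(cmd, shell=True, capture_output=True).returncode == 0:
            return True
        if attempt < retries:
            time.sleep(delay)
    return False

Here both testbed-node-4 and testbed-node-5 succeeded on the second attempt, which is normal while libvirtd is still coming up inside a freshly restarted container.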
2025-05-28 17:35:47.795697 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:35:47.795703 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:35:47.795710 | orchestrator | 2025-05-28 17:35:47.795716 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-05-28 17:35:47.795723 | orchestrator | Wednesday 28 May 2025 17:34:10 +0000 (0:00:06.570) 0:07:19.669 ********* 2025-05-28 17:35:47.795733 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:35:47.795756 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:35:47.795763 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:35:47.795770 | orchestrator | 2025-05-28 17:35:47.795776 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-05-28 17:35:47.795783 | orchestrator | Wednesday 28 May 2025 17:34:11 +0000 (0:00:00.808) 0:07:20.477 ********* 2025-05-28 17:35:47.795789 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:35:47.795796 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:35:47.795802 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:35:47.795809 | orchestrator | 2025-05-28 17:35:47.795815 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-05-28 17:35:47.795822 | orchestrator | Wednesday 28 May 2025 17:34:39 +0000 (0:00:28.230) 0:07:48.708 ********* 2025-05-28 17:35:47.795828 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:35:47.795835 | orchestrator | 2025-05-28 17:35:47.795841 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-05-28 17:35:47.795848 | orchestrator | Wednesday 28 May 2025 17:34:40 +0000 (0:00:00.138) 0:07:48.847 ********* 2025-05-28 17:35:47.795854 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:35:47.795861 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:35:47.795868 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:47.795881 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:47.795888 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:47.795895 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
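The wait above is delegated to testbed-node-0 (where the nova API containers run) and polls the service list until every expected nova-compute has registered; only then does discover_computes.yml map the new hosts into the cell. A hedged sketch of the equivalent manual check; the kolla_toolbox container name and the exact CLI flags are assumptions, not read from this log:

import subprocess

def computes_up():
    """Return True once every registered nova-compute service reports 'up'."""
    states = subprocess.run(
        ["docker", "exec", "kolla_toolbox", "openstack",
         "compute", "service", "list", "--service", "nova-compute",
         "-f", "value", "-c", "State"],
        capture_output=True, text=True,
    ).stdout.split()
    return bool(states) and all(s == "up" for s in states)

The "Discover nova hosts" task that follows corresponds roughly to running nova-manage cell_v2 discover_hosts inside the conductor container.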
2025-05-28 17:35:47.795901 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-05-28 17:35:47.795908 | orchestrator | 2025-05-28 17:35:47.795915 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-05-28 17:35:47.795921 | orchestrator | Wednesday 28 May 2025 17:35:01 +0000 (0:00:21.366) 0:08:10.213 ********* 2025-05-28 17:35:47.795928 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:35:47.795934 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:35:47.795941 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:47.795947 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:35:47.795954 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:47.795960 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:47.795967 | orchestrator | 2025-05-28 17:35:47.795974 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-05-28 17:35:47.795980 | orchestrator | Wednesday 28 May 2025 17:35:09 +0000 (0:00:08.401) 0:08:18.615 ********* 2025-05-28 17:35:47.795987 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:47.795993 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:35:47.796000 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:35:47.796006 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:47.796013 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:47.796019 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2025-05-28 17:35:47.796026 | orchestrator | 2025-05-28 17:35:47.796033 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-28 17:35:47.796039 | orchestrator | Wednesday 28 May 2025 17:35:13 +0000 (0:00:03.913) 0:08:22.529 ********* 2025-05-28 17:35:47.796046 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-05-28 17:35:47.796052 | orchestrator | 2025-05-28 17:35:47.796059 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-28 17:35:47.796065 | orchestrator | Wednesday 28 May 2025 17:35:25 +0000 (0:00:11.296) 0:08:33.825 ********* 2025-05-28 17:35:47.796072 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-05-28 17:35:47.796079 | orchestrator | 2025-05-28 17:35:47.796085 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-05-28 17:35:47.796092 | orchestrator | Wednesday 28 May 2025 17:35:26 +0000 (0:00:01.238) 0:08:35.063 ********* 2025-05-28 17:35:47.796098 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:35:47.796105 | orchestrator | 2025-05-28 17:35:47.796111 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-05-28 17:35:47.796121 | orchestrator | Wednesday 28 May 2025 17:35:27 +0000 (0:00:01.275) 0:08:36.339 ********* 2025-05-28 17:35:47.796128 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-05-28 17:35:47.796134 | orchestrator | 2025-05-28 17:35:47.796141 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-05-28 17:35:47.796148 | orchestrator | Wednesday 28 May 2025 17:35:38 +0000 (0:00:10.859) 0:08:47.198 ********* 2025-05-28 17:35:47.796154 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:35:47.796161 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:35:47.796167 | orchestrator | ok: 
[testbed-node-5] 2025-05-28 17:35:47.796174 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:35:47.796180 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:35:47.796187 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:35:47.796193 | orchestrator | 2025-05-28 17:35:47.796200 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-05-28 17:35:47.796207 | orchestrator | 2025-05-28 17:35:47.796213 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-05-28 17:35:47.796220 | orchestrator | Wednesday 28 May 2025 17:35:40 +0000 (0:00:01.698) 0:08:48.897 ********* 2025-05-28 17:35:47.796231 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:35:47.796238 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:35:47.796244 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:35:47.796251 | orchestrator | 2025-05-28 17:35:47.796257 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-05-28 17:35:47.796264 | orchestrator | 2025-05-28 17:35:47.796271 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-05-28 17:35:47.796277 | orchestrator | Wednesday 28 May 2025 17:35:41 +0000 (0:00:01.094) 0:08:49.992 ********* 2025-05-28 17:35:47.796284 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:47.796290 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:47.796297 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:47.796303 | orchestrator | 2025-05-28 17:35:47.796310 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-05-28 17:35:47.796316 | orchestrator | 2025-05-28 17:35:47.796327 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-05-28 17:35:47.796334 | orchestrator | Wednesday 28 May 2025 17:35:41 +0000 (0:00:00.488) 0:08:50.480 ********* 2025-05-28 17:35:47.796341 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-05-28 17:35:47.796347 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-05-28 17:35:47.796354 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-05-28 17:35:47.796361 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-05-28 17:35:47.796367 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-05-28 17:35:47.796374 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-05-28 17:35:47.796381 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-05-28 17:35:47.796387 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-05-28 17:35:47.796394 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-05-28 17:35:47.796401 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-05-28 17:35:47.796407 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-05-28 17:35:47.796414 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-05-28 17:35:47.796420 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:35:47.796427 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-05-28 17:35:47.796434 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-05-28 17:35:47.796440 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  
2025-05-28 17:35:47.796447 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-05-28 17:35:47.796453 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-05-28 17:35:47.796460 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-05-28 17:35:47.796467 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:35:47.796473 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-05-28 17:35:47.796480 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-05-28 17:35:47.796486 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-05-28 17:35:47.796493 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-05-28 17:35:47.796500 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-05-28 17:35:47.796506 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-05-28 17:35:47.796513 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:35:47.796519 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-05-28 17:35:47.796526 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-05-28 17:35:47.796533 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-05-28 17:35:47.796539 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-05-28 17:35:47.796546 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-05-28 17:35:47.796557 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-05-28 17:35:47.796563 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:47.796570 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:47.796577 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-05-28 17:35:47.796583 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-05-28 17:35:47.796590 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-05-28 17:35:47.796597 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-05-28 17:35:47.796603 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-05-28 17:35:47.796610 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-05-28 17:35:47.796617 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:47.796623 | orchestrator | 2025-05-28 17:35:47.796633 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-05-28 17:35:47.796640 | orchestrator | 2025-05-28 17:35:47.796646 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-05-28 17:35:47.796653 | orchestrator | Wednesday 28 May 2025 17:35:42 +0000 (0:00:01.286) 0:08:51.767 ********* 2025-05-28 17:35:47.796660 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-05-28 17:35:47.796666 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-05-28 17:35:47.796673 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:47.796679 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-05-28 17:35:47.796686 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-05-28 17:35:47.796692 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:47.796699 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-05-28 17:35:47.796705 | orchestrator | 
skipping: [testbed-node-2] => (item=nova-api)  2025-05-28 17:35:47.796712 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:47.796718 | orchestrator | 2025-05-28 17:35:47.796725 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-05-28 17:35:47.796732 | orchestrator | 2025-05-28 17:35:47.796758 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-05-28 17:35:47.796766 | orchestrator | Wednesday 28 May 2025 17:35:43 +0000 (0:00:00.701) 0:08:52.468 ********* 2025-05-28 17:35:47.796773 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:47.796779 | orchestrator | 2025-05-28 17:35:47.796786 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-05-28 17:35:47.796792 | orchestrator | 2025-05-28 17:35:47.796799 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-05-28 17:35:47.796805 | orchestrator | Wednesday 28 May 2025 17:35:44 +0000 (0:00:00.735) 0:08:53.204 ********* 2025-05-28 17:35:47.796812 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:35:47.796819 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:35:47.796825 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:35:47.796832 | orchestrator | 2025-05-28 17:35:47.796843 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:35:47.796850 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 17:35:47.796857 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-05-28 17:35:47.796864 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-05-28 17:35:47.796871 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-05-28 17:35:47.796877 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-05-28 17:35:47.796888 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025-05-28 17:35:47.796895 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-05-28 17:35:47.796902 | orchestrator | 2025-05-28 17:35:47.796908 | orchestrator | 2025-05-28 17:35:47.796915 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:35:47.796921 | orchestrator | Wednesday 28 May 2025 17:35:44 +0000 (0:00:00.426) 0:08:53.630 ********* 2025-05-28 17:35:47.796928 | orchestrator | =============================================================================== 2025-05-28 17:35:47.796934 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 40.07s 2025-05-28 17:35:47.796941 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 29.51s 2025-05-28 17:35:47.796947 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 28.23s 2025-05-28 17:35:47.796954 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 24.68s 2025-05-28 17:35:47.796961 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.37s 2025-05-28 17:35:47.796967 | orchestrator | nova-cell : 
Running Nova cell bootstrap container ---------------------- 20.58s 2025-05-28 17:35:47.796974 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 19.67s 2025-05-28 17:35:47.796980 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 17.66s 2025-05-28 17:35:47.796987 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.26s 2025-05-28 17:35:47.796993 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 13.39s 2025-05-28 17:35:47.797000 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.50s 2025-05-28 17:35:47.797006 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.30s 2025-05-28 17:35:47.797013 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.03s 2025-05-28 17:35:47.797019 | orchestrator | nova-cell : Create cell ------------------------------------------------ 10.86s 2025-05-28 17:35:47.797026 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.86s 2025-05-28 17:35:47.797032 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.84s 2025-05-28 17:35:47.797042 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.44s 2025-05-28 17:35:47.797049 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 8.91s 2025-05-28 17:35:47.797055 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.40s 2025-05-28 17:35:47.797062 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.75s 2025-05-28 17:35:47.797068 | orchestrator | 2025-05-28 17:35:47 | INFO  | Task 8ab5193b-c2ba-4252-ac9f-ee2dda347044 is in state STARTED 2025-05-28 17:35:47.797075 | orchestrator | 2025-05-28 17:35:47 | INFO  | Wait 1 second(s) until the next check
[... identical "Task 8ab5193b-c2ba-4252-ac9f-ee2dda347044 is in state STARTED" / "Wait 1 second(s) until the next check" pairs, repeated roughly every 3 seconds from 17:35:50 through 17:38:29, elided ...]
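[Annotation, not part of the captured job output] The block elided above is the OSISM client polling the manager for the state of background task 8ab5193b-c2ba-4252-ac9f-ee2dda347044 until it leaves STARTED. The pattern is a plain poll-and-sleep loop; a hedged shell sketch of the same shape (get_task_state is a hypothetical stand-in for the real API call, stubbed here so the sketch terminates immediately):

    #!/bin/sh
    # Poll a task until it is no longer STARTED, mirroring the elided log output.
    get_task_state() { echo SUCCESS; }  # hypothetical stub; the real client queries the OSISM manager
    TASK_ID=8ab5193b-c2ba-4252-ac9f-ee2dda347044
    while [ "$(get_task_state "$TASK_ID")" = "STARTED" ]; do
        echo "Task $TASK_ID is in state STARTED"
        echo "Wait 1 second(s) until the next check"
        sleep 1  # the log announces 1 s, but the observed spacing including the query is ~3 s
    done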
2025-05-28 17:38:32.452446 | orchestrator | 2025-05-28 17:38:32.452888 | orchestrator | 2025-05-28 17:38:32.452915 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-28 17:38:32.452929 | orchestrator | 2025-05-28 17:38:32.452941 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-28 17:38:32.452953 | orchestrator | Wednesday 28 May 2025 17:33:45 +0000 (0:00:00.269) 0:00:00.269 ********* 2025-05-28 17:38:32.452965 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:38:32.452977 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:38:32.452988 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:38:32.453000 | orchestrator | 2025-05-28 17:38:32.453032 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-28 17:38:32.453044 | orchestrator | Wednesday 28 May 2025 17:33:45 +0000 (0:00:00.296) 0:00:00.566 ********* 2025-05-28 17:38:32.453055 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-05-28 17:38:32.453067 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-05-28 17:38:32.453078 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-05-28 17:38:32.453089 | orchestrator | 2025-05-28 17:38:32.453101 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-05-28 17:38:32.453112 | orchestrator | 2025-05-28 17:38:32.453123 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-28 17:38:32.453134 | orchestrator | Wednesday 28 May 2025 17:33:45 +0000 (0:00:00.429) 0:00:00.996 ********* 2025-05-28 17:38:32.453146 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:38:32.453159 | orchestrator | 2025-05-28 17:38:32.453170 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-05-28 17:38:32.453181 | orchestrator | Wednesday 28 May 2025 17:33:46 +0000 (0:00:00.572) 0:00:01.568 ********* 2025-05-28 17:38:32.453193 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-05-28 17:38:32.453204 | orchestrator | 2025-05-28 17:38:32.453215 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-05-28 17:38:32.453226 | orchestrator | Wednesday 28 May 2025 17:33:49 +0000 (0:00:03.304) 0:00:04.872 
2025-05-28 17:38:32.453237 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-05-28 17:38:32.453249 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-05-28 17:38:32.453260 | orchestrator | 2025-05-28 17:38:32.453271 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-05-28 17:38:32.453308 | orchestrator | Wednesday 28 May 2025 17:33:56 +0000 (0:00:06.547) 0:00:11.420 ********* 2025-05-28 17:38:32.453321 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-28 17:38:32.453332 | orchestrator | 2025-05-28 17:38:32.453343 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-05-28 17:38:32.453354 | orchestrator | Wednesday 28 May 2025 17:33:59 +0000 (0:00:03.274) 0:00:14.694 ********* 2025-05-28 17:38:32.453365 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-28 17:38:32.453379 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-05-28 17:38:32.453397 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-05-28 17:38:32.453415 | orchestrator | 2025-05-28 17:38:32.453434 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-05-28 17:38:32.453452 | orchestrator | Wednesday 28 May 2025 17:34:07 +0000 (0:00:07.975) 0:00:22.670 ********* 2025-05-28 17:38:32.453467 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-28 17:38:32.453481 | orchestrator | 2025-05-28 17:38:32.453493 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-05-28 17:38:32.453509 | orchestrator | Wednesday 28 May 2025 17:34:10 +0000 (0:00:03.227) 0:00:25.897 ********* 2025-05-28 17:38:32.453528 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-05-28 17:38:32.453545 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-05-28 17:38:32.453566 | orchestrator | 2025-05-28 17:38:32.453587 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-05-28 17:38:32.453635 | orchestrator | Wednesday 28 May 2025 17:34:18 +0000 (0:00:07.440) 0:00:33.337 ********* 2025-05-28 17:38:32.453650 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-05-28 17:38:32.453662 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-05-28 17:38:32.453674 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-05-28 17:38:32.453686 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-05-28 17:38:32.453698 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-05-28 17:38:32.453710 | orchestrator | 2025-05-28 17:38:32.453723 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-28 17:38:32.453735 | orchestrator | Wednesday 28 May 2025 17:34:34 +0000 (0:00:15.838) 0:00:49.176 ********* 2025-05-28 17:38:32.453747 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:38:32.453759 | orchestrator | 2025-05-28 17:38:32.453772 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-05-28 17:38:32.453784 | orchestrator | 
Wednesday 28 May 2025 17:34:34 +0000 (0:00:00.547) 0:00:49.723 ********* 2025-05-28 17:38:32.453796 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:38:32.453808 | orchestrator | 2025-05-28 17:38:32.453820 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-05-28 17:38:32.453832 | orchestrator | Wednesday 28 May 2025 17:34:39 +0000 (0:00:05.103) 0:00:54.826 ********* 2025-05-28 17:38:32.453843 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:38:32.453854 | orchestrator | 2025-05-28 17:38:32.453864 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-05-28 17:38:32.454009 | orchestrator | Wednesday 28 May 2025 17:34:44 +0000 (0:00:04.689) 0:00:59.515 ********* 2025-05-28 17:38:32.454082 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:38:32.454095 | orchestrator | 2025-05-28 17:38:32.454106 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-05-28 17:38:32.454117 | orchestrator | Wednesday 28 May 2025 17:34:47 +0000 (0:00:03.202) 0:01:02.718 ********* 2025-05-28 17:38:32.454128 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-05-28 17:38:32.454139 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-05-28 17:38:32.454149 | orchestrator | 2025-05-28 17:38:32.454182 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2025-05-28 17:38:32.454194 | orchestrator | Wednesday 28 May 2025 17:34:58 +0000 (0:00:10.775) 0:01:13.493 ********* 2025-05-28 17:38:32.454205 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-05-28 17:38:32.454216 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2025-05-28 17:38:32.454230 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-05-28 17:38:32.454242 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2025-05-28 17:38:32.454253 | orchestrator | 2025-05-28 17:38:32.454264 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2025-05-28 17:38:32.454274 | orchestrator | Wednesday 28 May 2025 17:35:14 +0000 (0:00:15.646) 0:01:29.140 ********* 2025-05-28 17:38:32.454285 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:38:32.454296 | orchestrator | 2025-05-28 17:38:32.454306 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-05-28 17:38:32.454317 | orchestrator | Wednesday 28 May 2025 17:35:18 +0000 (0:00:04.609) 0:01:33.749 ********* 2025-05-28 17:38:32.454328 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:38:32.454381 | orchestrator | 2025-05-28 17:38:32.454393 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-05-28 17:38:32.454404 | orchestrator | Wednesday 28 May 2025 17:35:23 +0000 (0:00:05.261) 0:01:39.010 ********* 2025-05-28 17:38:32.454414 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:38:32.454425 | orchestrator | 2025-05-28 17:38:32.454436 | orchestrator | TASK [octavia : Update loadbalancer management subnet] 
************************* 2025-05-28 17:38:32.454447 | orchestrator | Wednesday 28 May 2025 17:35:24 +0000 (0:00:00.187) 0:01:39.198 ********* 2025-05-28 17:38:32.454457 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:38:32.454468 | orchestrator | 2025-05-28 17:38:32.454479 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-28 17:38:32.454490 | orchestrator | Wednesday 28 May 2025 17:35:29 +0000 (0:00:05.189) 0:01:44.388 ********* 2025-05-28 17:38:32.454508 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:38:32.454526 | orchestrator | 2025-05-28 17:38:32.454543 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-05-28 17:38:32.454561 | orchestrator | Wednesday 28 May 2025 17:35:30 +0000 (0:00:01.139) 0:01:45.527 ********* 2025-05-28 17:38:32.454577 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:38:32.454589 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:38:32.454629 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:38:32.454642 | orchestrator | 2025-05-28 17:38:32.454654 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2025-05-28 17:38:32.454666 | orchestrator | Wednesday 28 May 2025 17:35:35 +0000 (0:00:05.219) 0:01:50.746 ********* 2025-05-28 17:38:32.454679 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:38:32.454691 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:38:32.454702 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:38:32.454714 | orchestrator | 2025-05-28 17:38:32.454726 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2025-05-28 17:38:32.454738 | orchestrator | Wednesday 28 May 2025 17:35:39 +0000 (0:00:03.976) 0:01:54.723 ********* 2025-05-28 17:38:32.454751 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:38:32.454763 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:38:32.454774 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:38:32.454786 | orchestrator | 2025-05-28 17:38:32.454798 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2025-05-28 17:38:32.454811 | orchestrator | Wednesday 28 May 2025 17:35:40 +0000 (0:00:00.783) 0:01:55.506 ********* 2025-05-28 17:38:32.454833 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:38:32.454845 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:38:32.454857 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:38:32.454869 | orchestrator | 2025-05-28 17:38:32.454881 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2025-05-28 17:38:32.454893 | orchestrator | Wednesday 28 May 2025 17:35:42 +0000 (0:00:01.958) 0:01:57.465 ********* 2025-05-28 17:38:32.454905 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:38:32.454917 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:38:32.454930 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:38:32.454942 | orchestrator | 2025-05-28 17:38:32.454953 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2025-05-28 17:38:32.454964 | orchestrator | Wednesday 28 May 2025 17:35:43 +0000 (0:00:01.232) 0:01:58.697 ********* 2025-05-28 17:38:32.454974 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:38:32.454985 | orchestrator | changed: [testbed-node-1] 2025-05-28 
17:38:32.454995 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:38:32.455005 | orchestrator | 2025-05-28 17:38:32.455016 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2025-05-28 17:38:32.455027 | orchestrator | Wednesday 28 May 2025 17:35:44 +0000 (0:00:01.151) 0:01:59.848 ********* 2025-05-28 17:38:32.455037 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:38:32.455048 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:38:32.455059 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:38:32.455069 | orchestrator | 2025-05-28 17:38:32.455129 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2025-05-28 17:38:32.455142 | orchestrator | Wednesday 28 May 2025 17:35:46 +0000 (0:00:02.009) 0:02:01.858 ********* 2025-05-28 17:38:32.455153 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:38:32.455164 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:38:32.455174 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:38:32.455185 | orchestrator | 2025-05-28 17:38:32.455195 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2025-05-28 17:38:32.455213 | orchestrator | Wednesday 28 May 2025 17:35:48 +0000 (0:00:01.780) 0:02:03.639 ********* 2025-05-28 17:38:32.455224 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:38:32.455234 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:38:32.455245 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:38:32.455256 | orchestrator | 2025-05-28 17:38:32.455266 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2025-05-28 17:38:32.455277 | orchestrator | Wednesday 28 May 2025 17:35:49 +0000 (0:00:00.585) 0:02:04.224 ********* 2025-05-28 17:38:32.455287 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:38:32.455298 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:38:32.455309 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:38:32.455319 | orchestrator | 2025-05-28 17:38:32.455330 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-28 17:38:32.455340 | orchestrator | Wednesday 28 May 2025 17:35:52 +0000 (0:00:02.950) 0:02:07.175 ********* 2025-05-28 17:38:32.455351 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:38:32.455362 | orchestrator | 2025-05-28 17:38:32.455372 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2025-05-28 17:38:32.455383 | orchestrator | Wednesday 28 May 2025 17:35:52 +0000 (0:00:00.703) 0:02:07.878 ********* 2025-05-28 17:38:32.455394 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:38:32.455404 | orchestrator | 2025-05-28 17:38:32.455415 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-05-28 17:38:32.455426 | orchestrator | Wednesday 28 May 2025 17:35:56 +0000 (0:00:03.850) 0:02:11.729 ********* 2025-05-28 17:38:32.455436 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:38:32.455447 | orchestrator | 2025-05-28 17:38:32.455458 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2025-05-28 17:38:32.455468 | orchestrator | Wednesday 28 May 2025 17:35:59 +0000 (0:00:03.050) 0:02:14.780 ********* 2025-05-28 17:38:32.455485 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-05-28 
17:38:32.455496 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-05-28 17:38:32.455507 | orchestrator | 2025-05-28 17:38:32.455517 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2025-05-28 17:38:32.455528 | orchestrator | Wednesday 28 May 2025 17:36:06 +0000 (0:00:07.016) 0:02:21.796 ********* 2025-05-28 17:38:32.455539 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:38:32.455549 | orchestrator | 2025-05-28 17:38:32.455560 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2025-05-28 17:38:32.455570 | orchestrator | Wednesday 28 May 2025 17:36:09 +0000 (0:00:03.307) 0:02:25.104 ********* 2025-05-28 17:38:32.455581 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:38:32.455615 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:38:32.455628 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:38:32.455638 | orchestrator | 2025-05-28 17:38:32.455649 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-05-28 17:38:32.455660 | orchestrator | Wednesday 28 May 2025 17:36:10 +0000 (0:00:00.321) 0:02:25.425 ********* 2025-05-28 17:38:32.455675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-28 17:38:32.455724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-28 17:38:32.455745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-28 17:38:32.455765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-28 17:38:32.455778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-28 17:38:32.455789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-28 17:38:32.455801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-28 17:38:32.455815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-28 17:38:32.455857 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-28 17:38:32.455877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-28 17:38:32.455897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-28 17:38:32.455908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-28 17:38:32.455920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:38:32.455936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:38:32.455948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:38:32.455959 | orchestrator | 2025-05-28 17:38:32.455971 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-05-28 17:38:32.455982 | orchestrator | Wednesday 28 May 2025 17:36:12 +0000 (0:00:02.580) 0:02:28.006 ********* 2025-05-28 17:38:32.455993 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:38:32.456004 | orchestrator | 2025-05-28 17:38:32.456045 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-05-28 17:38:32.456058 | orchestrator | Wednesday 28 May 2025 17:36:13 +0000 (0:00:00.325) 0:02:28.332 ********* 2025-05-28 17:38:32.456069 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:38:32.456080 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:38:32.456091 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:38:32.456101 | orchestrator | 2025-05-28 17:38:32.456117 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-05-28 17:38:32.456136 | orchestrator | Wednesday 28 May 2025 17:36:13 +0000 (0:00:00.295) 0:02:28.628 ********* 2025-05-28 17:38:32.456148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-28 17:38:32.456160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-28 17:38:32.456172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-28 17:38:32.456184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-28 17:38:32.456195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:38:32.456206 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:38:32.456253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-28 17:38:32.456281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-28 17:38:32.456292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-28 17:38:32.456303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-28 17:38:32.456315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:38:32.456326 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:38:32.456337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-28 17:38:32.456382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-28 17:38:32.456407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-28 17:38:32.456419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-28 17:38:32.456430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:38:32.456441 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:38:32.456453 | orchestrator | 2025-05-28 17:38:32.456464 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-28 17:38:32.456475 | orchestrator | Wednesday 28 May 2025 17:36:14 +0000 (0:00:00.662) 0:02:29.290 ********* 2025-05-28 17:38:32.456486 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:38:32.456497 | orchestrator | 2025-05-28 17:38:32.456508 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-05-28 17:38:32.456518 | orchestrator | Wednesday 28 May 2025 17:36:14 +0000 (0:00:00.515) 0:02:29.806 ********* 2025-05-28 17:38:32.456530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-28 17:38:32.456572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 
'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-28 17:38:32 | INFO  | Task 8ab5193b-c2ba-4252-ac9f-ee2dda347044 is in state SUCCESS 2025-05-28 17:38:32.456654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-28 17:38:32.456666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-28 17:38:32.456678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-28 17:38:32.456689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions':
{}}}) 2025-05-28 17:38:32.456700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-28 17:38:32.456753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-28 17:38:32.456772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-28 17:38:32.456783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-28 17:38:32.456794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-28 17:38:32.456805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 
'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-28 17:38:32.456816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:38:32.456828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:38:32.456853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:38:32.456864 | orchestrator | 2025-05-28 17:38:32.456876 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-05-28 17:38:32.456887 | orchestrator | Wednesday 28 May 2025 17:36:19 +0000 (0:00:05.016) 0:02:34.822 ********* 2025-05-28 17:38:32.456903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-28 17:38:32.456915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-28 17:38:32.456926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-28 17:38:32.456937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-28 17:38:32.456955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:38:32.456966 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:38:32.456994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-28 17:38:32.457006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 
'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-28 17:38:32.457017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-28 17:38:32.457028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-28 17:38:32.457040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:38:32.457051 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:38:32.457067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-28 17:38:32.457084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-28 17:38:32.457100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-28 17:38:32.457112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-28 17:38:32.457123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:38:32.457134 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:38:32.457145 | orchestrator | 2025-05-28 17:38:32.457156 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-05-28 17:38:32.457167 | orchestrator | Wednesday 28 May 2025 17:36:20 +0000 (0:00:00.678) 0:02:35.501 ********* 2025-05-28 17:38:32.457178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}}}})  2025-05-28 17:38:32.457196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-28 17:38:32.457213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-28 17:38:32.457229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-28 17:38:32.457241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:38:32.457252 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:38:32.457263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-28 17:38:32.457285 | 
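Note on the repeated (item={'key': ..., 'value': ...}) entries in the loops above: they are all rendered from a single per-service map in the octavia role, which is why every service appears once per node in each task. A minimal sketch of that structure in kolla-ansible-style YAML, reconstructed from the logged items; the variable name octavia_services and the defaults layout are assumptions (not shown in this log), and the empty '' strings in the logged volume lists are conditional mounts that rendered empty on this deployment and are omitted here:

octavia_services:                      # assumed name, following kolla-ansible conventions
  octavia-api:
    container_name: octavia_api
    group: octavia-api
    enabled: true
    image: registry.osism.tech/kolla/octavia-api:2024.2
    volumes:
      - "/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "/etc/timezone:/etc/timezone:ro"
      - "kolla_logs:/var/log/kolla/"
      - "octavia_driver_agent:/var/run/octavia/"
    healthcheck:
      interval: "30"
      retries: "3"
      start_period: "5"
      test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9876"]  # per-node API address in the log
      timeout: "30"
    haproxy:
      octavia_api:
        enabled: "yes"
        mode: "http"
        external: false
        port: "9876"
        listen_port: "9876"
        tls_backend: "no"
      octavia_api_external:
        enabled: "yes"
        mode: "http"
        external: true
        external_fqdn: "api.testbed.osism.xyz"
        port: "9876"
        listen_port: "9876"
        tls_backend: "no"
  octavia-worker:
    container_name: octavia_worker
    group: octavia-worker
    enabled: true
    image: registry.osism.tech/kolla/octavia-worker:2024.2
    volumes:
      - "/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro"
      - "kolla_logs:/var/log/kolla/"
    healthcheck:
      interval: "30"
      retries: "3"
      start_period: "5"
      test: ["CMD-SHELL", "healthcheck_port octavia-worker 5672"]  # checks the service's AMQP connection port
      timeout: "30"

The "Copying over extra CA certificates", backend-TLS, and config.json tasks in this section all iterate this same map per host, skipping items when the relevant feature (here, backend TLS) is disabled.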
orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-28 17:38:32.457298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-28 17:38:32.457317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-28 17:38:32.457351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:38:32.457371 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:38:32.457389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-28 17:38:32.457407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-28 17:38:32.457426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-28 17:38:32.457457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-28 17:38:32.457477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-28 17:38:32.457489 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:38:32.457500 | orchestrator | 2025-05-28 17:38:32.457510 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-05-28 17:38:32.457521 | orchestrator | Wednesday 28 May 2025 17:36:21 +0000 (0:00:00.866) 0:02:36.367 ********* 2025-05-28 17:38:32.457548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-28 17:38:32.457560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-28 17:38:32.457572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-28 17:38:32.457590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-28 17:38:32.457754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-28 17:38:32.457787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-28 17:38:32.457808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-28 17:38:32.457821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-28 17:38:32.457832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-28 17:38:32.457855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-28 17:38:32.457866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-28 17:38:32.457878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-28 17:38:32.457897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:38:32.457914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:38:32.457926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:38:32.457945 | orchestrator | 2025-05-28 17:38:32.457958 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-05-28 17:38:32.457969 | orchestrator | Wednesday 28 May 2025 17:36:26 +0000 (0:00:05.029) 0:02:41.396 ********* 2025-05-28 17:38:32.457980 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-05-28 17:38:32.457992 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-05-28 17:38:32.458002 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-05-28 17:38:32.458013 | orchestrator | 2025-05-28 17:38:32.458076 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-05-28 17:38:32.458088 | orchestrator | Wednesday 28 May 2025 17:36:27 +0000 (0:00:01.549) 0:02:42.946 ********* 2025-05-28 17:38:32.458099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-28 17:38:32.458111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-28 17:38:32.458137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-28 17:38:32.458149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-28 17:38:32.458229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-28 17:38:32.458242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-28 17:38:32.458252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-28 17:38:32.458263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-28 17:38:32.458279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-28 17:38:32.458295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-28 17:38:32.458305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-28 17:38:32.458323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-28 17:38:32.458333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:38:32.458343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:38:32.458353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:38:32.458363 | orchestrator | 2025-05-28 17:38:32.458373 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-05-28 17:38:32.458383 | orchestrator | Wednesday 28 May 2025 17:36:43 +0000 (0:00:15.995) 0:02:58.941 ********* 2025-05-28 17:38:32.458392 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:38:32.458402 | orchestrator | changed: [testbed-node-1] 2025-05-28 17:38:32.458412 | orchestrator | changed: [testbed-node-2] 2025-05-28 17:38:32.458421 | orchestrator | 2025-05-28 17:38:32.458431 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] 
****************** 2025-05-28 17:38:32.458441 | orchestrator | Wednesday 28 May 2025 17:36:45 +0000 (0:00:01.447) 0:03:00.389 ********* 2025-05-28 17:38:32.458455 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-05-28 17:38:32.458466 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-05-28 17:38:32.458475 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-05-28 17:38:32.458485 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-05-28 17:38:32.458494 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-05-28 17:38:32.458508 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-05-28 17:38:32.458524 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-05-28 17:38:32.458534 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-05-28 17:38:32.458543 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-05-28 17:38:32.458553 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-05-28 17:38:32.458562 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-05-28 17:38:32.458572 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-05-28 17:38:32.458581 | orchestrator | 2025-05-28 17:38:32.458591 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-05-28 17:38:32.458634 | orchestrator | Wednesday 28 May 2025 17:36:50 +0000 (0:00:05.330) 0:03:05.720 ********* 2025-05-28 17:38:32.458650 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-05-28 17:38:32.458666 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-05-28 17:38:32.458681 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-05-28 17:38:32.458695 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-05-28 17:38:32.458705 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-05-28 17:38:32.458715 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-05-28 17:38:32.458724 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-05-28 17:38:32.458734 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-05-28 17:38:32.458744 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-05-28 17:38:32.458753 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-05-28 17:38:32.458763 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-05-28 17:38:32.458772 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-05-28 17:38:32.458781 | orchestrator | 2025-05-28 17:38:32.458791 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-05-28 17:38:32.458801 | orchestrator | Wednesday 28 May 2025 17:36:55 +0000 (0:00:04.925) 0:03:10.646 ********* 2025-05-28 17:38:32.458810 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-05-28 17:38:32.458820 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-05-28 17:38:32.458829 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-05-28 17:38:32.458839 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-05-28 17:38:32.458848 | 
orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-05-28 17:38:32.458858 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-05-28 17:38:32.458867 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-05-28 17:38:32.458877 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-05-28 17:38:32.458887 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-05-28 17:38:32.458898 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-05-28 17:38:32.458909 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-05-28 17:38:32.458919 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-05-28 17:38:32.458930 | orchestrator | 2025-05-28 17:38:32.458940 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-05-28 17:38:32.458951 | orchestrator | Wednesday 28 May 2025 17:37:00 +0000 (0:00:05.021) 0:03:15.667 ********* 2025-05-28 17:38:32.458963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-28 17:38:32.459000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-28 17:38:32.459012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-28 17:38:32.459024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-28 17:38:32.459036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-28 17:38:32.459047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-28 17:38:32.459059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-28 17:38:32.459085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-28 17:38:32.459102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-28 17:38:32.459113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-28 17:38:32.459125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-28 17:38:32.459136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-28 17:38:32.459148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:38:32.459180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:38:32.459199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-28 17:38:32.459211 | orchestrator | 2025-05-28 17:38:32.459229 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-28 17:38:32.459240 | orchestrator | Wednesday 28 May 2025 17:37:04 +0000 (0:00:03.493) 0:03:19.160 ********* 2025-05-28 17:38:32.459252 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:38:32.459263 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:38:32.459274 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:38:32.459284 | orchestrator | 2025-05-28 17:38:32.459295 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-05-28 17:38:32.459306 | orchestrator | Wednesday 28 May 2025 17:37:04 +0000 (0:00:00.279) 0:03:19.440 ********* 2025-05-28 17:38:32.459316 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:38:32.459327 | orchestrator | 2025-05-28 17:38:32.459338 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-05-28 17:38:32.459349 | orchestrator | Wednesday 28 May 2025 17:37:06 +0000 (0:00:02.412) 0:03:21.852 ********* 2025-05-28 17:38:32.459359 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:38:32.459370 | orchestrator | 2025-05-28 17:38:32.459381 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-05-28 17:38:32.459392 | orchestrator | Wednesday 28 May 2025 17:37:08 +0000 (0:00:01.931) 0:03:23.784 ********* 2025-05-28 17:38:32.459403 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:38:32.459413 | orchestrator | 2025-05-28 17:38:32.459424 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-05-28 17:38:32.459436 | orchestrator | Wednesday 28 May 2025 17:37:10 +0000 (0:00:02.093) 0:03:25.877 ********* 2025-05-28 17:38:32.459447 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:38:32.459457 | orchestrator | 2025-05-28 17:38:32.459468 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-05-28 17:38:32.459479 | orchestrator | Wednesday 28 May 2025 17:37:12 +0000 (0:00:02.005) 0:03:27.883 ********* 2025-05-28 17:38:32.459489 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:38:32.459500 | orchestrator | 2025-05-28 17:38:32.459511 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-05-28 17:38:32.459522 | orchestrator | Wednesday 28 May 2025 17:37:32 +0000 (0:00:19.494) 0:03:47.378 ********* 2025-05-28 17:38:32.459532 | orchestrator | 2025-05-28 17:38:32.459543 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-05-28 17:38:32.459554 | orchestrator | Wednesday 28 May 2025 17:37:32 +0000 (0:00:00.065) 0:03:47.443 ********* 2025-05-28 17:38:32.459572 | orchestrator | 
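Note: the 'healthcheck' blocks in the container definitions above are kolla-ansible's way of expressing Docker healthcheck options (interval, retries, start_period, timeout, test), using the healthcheck_port/healthcheck_curl scripts shipped in the kolla images. A minimal sketch, assuming Docker CLI access on a node and the container names shown above, of querying the health state those definitions produce once the handlers below have restarted the containers:

    # Hedged sketch, not part of the job output: query the health state Docker
    # derives from the healthcheck_port/healthcheck_curl tests defined above.
    docker inspect --format '{{.State.Health.Status}}' octavia_api
    docker inspect --format '{{.State.Health.Status}}' octavia_worker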
2025-05-28 17:38:32.459583 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2025-05-28 17:38:32.459633 | orchestrator | Wednesday 28 May 2025 17:37:32 +0000 (0:00:00.066) 0:03:47.509 *********
2025-05-28 17:38:32.459647 | orchestrator |
2025-05-28 17:38:32.459658 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2025-05-28 17:38:32.459669 | orchestrator | Wednesday 28 May 2025 17:37:32 +0000 (0:00:00.063) 0:03:47.573 *********
2025-05-28 17:38:32.459680 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:38:32.459691 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:38:32.459701 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:38:32.459712 | orchestrator |
2025-05-28 17:38:32.459722 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2025-05-28 17:38:32.459733 | orchestrator | Wednesday 28 May 2025 17:37:49 +0000 (0:00:16.637) 0:04:04.211 *********
2025-05-28 17:38:32.459744 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:38:32.459755 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:38:32.459765 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:38:32.459776 | orchestrator |
2025-05-28 17:38:32.459786 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2025-05-28 17:38:32.459797 | orchestrator | Wednesday 28 May 2025 17:38:00 +0000 (0:00:11.334) 0:04:15.546 *********
2025-05-28 17:38:32.459808 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:38:32.459819 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:38:32.459829 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:38:32.459840 | orchestrator |
2025-05-28 17:38:32.459851 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2025-05-28 17:38:32.459861 | orchestrator | Wednesday 28 May 2025 17:38:10 +0000 (0:00:10.321) 0:04:25.867 *********
2025-05-28 17:38:32.459872 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:38:32.459883 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:38:32.459893 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:38:32.459904 | orchestrator |
2025-05-28 17:38:32.459914 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2025-05-28 17:38:32.459925 | orchestrator | Wednesday 28 May 2025 17:38:18 +0000 (0:00:08.152) 0:04:34.020 *********
2025-05-28 17:38:32.459936 | orchestrator | changed: [testbed-node-1]
2025-05-28 17:38:32.459946 | orchestrator | changed: [testbed-node-0]
2025-05-28 17:38:32.459957 | orchestrator | changed: [testbed-node-2]
2025-05-28 17:38:32.459968 | orchestrator |
2025-05-28 17:38:32.459978 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 17:38:32.459990 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-28 17:38:32.460002 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-28 17:38:32.460013 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-28 17:38:32.460024 | orchestrator |
2025-05-28 17:38:32.460035 | orchestrator |
2025-05-28 17:38:32.460052 | orchestrator | TASKS RECAP ********************************************************************
2025-05-28 17:38:32.460063 | orchestrator | Wednesday 28 May 2025 17:38:29 +0000 (0:00:10.639) 0:04:44.659 *********
2025-05-28 17:38:32.460074 | orchestrator | ===============================================================================
2025-05-28 17:38:32.460085 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 19.49s
2025-05-28 17:38:32.460096 | orchestrator | octavia : Restart octavia-api container -------------------------------- 16.64s
2025-05-28 17:38:32.460112 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.00s
2025-05-28 17:38:32.460124 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.84s
2025-05-28 17:38:32.460134 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.65s
2025-05-28 17:38:32.460152 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.33s
2025-05-28 17:38:32.460163 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.78s
2025-05-28 17:38:32.460173 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.64s
2025-05-28 17:38:32.460184 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.32s
2025-05-28 17:38:32.460194 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 8.15s
2025-05-28 17:38:32.460205 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.98s
2025-05-28 17:38:32.460216 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.44s
2025-05-28 17:38:32.460226 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.02s
2025-05-28 17:38:32.460237 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.55s
2025-05-28 17:38:32.460248 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.33s
2025-05-28 17:38:32.460258 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.26s
2025-05-28 17:38:32.460269 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.22s
2025-05-28 17:38:32.460280 | orchestrator | octavia : Update loadbalancer management subnet ------------------------- 5.19s
2025-05-28 17:38:32.460290 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.10s
2025-05-28 17:38:32.460301 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.03s
2025-05-28 17:38:32.460312 | orchestrator | 2025-05-28 17:38:32 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-28 17:38:35.500943 | orchestrator | 2025-05-28 17:38:35 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-28 17:38:38.548050 | orchestrator | 2025-05-28 17:38:38 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-28 17:38:41.591417 | orchestrator | 2025-05-28 17:38:41 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-28 17:38:44.638446 | orchestrator | 2025-05-28 17:38:44 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-28 17:38:47.693889 | orchestrator | 2025-05-28 17:38:47 | INFO  | Task 69be0997-1372-4772-b3e3-a546d354caad is in state STARTED
2025-05-28 17:38:47.694166 | orchestrator | 2025-05-28 17:38:47 | INFO  | Wait 1 second(s) until the next check
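Note: the per-task "Wednesday 28 May ..." stamps and the sorted TASKS RECAP above look like output of Ansible's profile_tasks callback plugin. A minimal sketch of enabling the same per-task timing in a standalone setup, assuming the ansible.posix collection is installed:

    # Hedged sketch: enable per-task timing like the recap above.
    cat >> ansible.cfg <<'EOF'
    [defaults]
    callbacks_enabled = ansible.posix.profile_tasks
    EOF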
2025-05-28 17:38:50.763029 | orchestrator | 2025-05-28 17:38:50 | INFO  | Task 69be0997-1372-4772-b3e3-a546d354caad is in state STARTED
2025-05-28 17:38:50.763152 | orchestrator | 2025-05-28 17:38:50 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:38:53.819056 | orchestrator | 2025-05-28 17:38:53 | INFO  | Task 69be0997-1372-4772-b3e3-a546d354caad is in state STARTED
2025-05-28 17:38:53.819184 | orchestrator | 2025-05-28 17:38:53 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:38:56.878372 | orchestrator | 2025-05-28 17:38:56 | INFO  | Task 69be0997-1372-4772-b3e3-a546d354caad is in state STARTED
2025-05-28 17:38:56.878517 | orchestrator | 2025-05-28 17:38:56 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:38:59.936265 | orchestrator | 2025-05-28 17:38:59 | INFO  | Task 69be0997-1372-4772-b3e3-a546d354caad is in state STARTED
2025-05-28 17:38:59.936399 | orchestrator | 2025-05-28 17:38:59 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:39:02.995553 | orchestrator | 2025-05-28 17:39:02 | INFO  | Task 69be0997-1372-4772-b3e3-a546d354caad is in state STARTED
2025-05-28 17:39:02.996154 | orchestrator | 2025-05-28 17:39:02 | INFO  | Wait 1 second(s) until the next check
2025-05-28 17:39:06.045559 | orchestrator | 2025-05-28 17:39:06 | INFO  | Task 69be0997-1372-4772-b3e3-a546d354caad is in state SUCCESS
2025-05-28 17:39:06.045789 | orchestrator | 2025-05-28 17:39:06 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-28 17:39:09.085625 | orchestrator | 2025-05-28 17:39:09 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-28 17:39:12.130368 | orchestrator | 2025-05-28 17:39:12 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-28 17:39:15.171466 | orchestrator | 2025-05-28 17:39:15 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-28 17:39:18.216950 | orchestrator | 2025-05-28 17:39:18 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-28 17:39:21.262936 | orchestrator | 2025-05-28 17:39:21 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-28 17:39:24.304708 | orchestrator | 2025-05-28 17:39:24 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-28 17:39:27.345000 | orchestrator | 2025-05-28 17:39:27 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-28 17:39:30.399914 | orchestrator | 2025-05-28 17:39:30 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-28 17:39:33.440664 | orchestrator | 2025-05-28 17:39:33 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-28 17:39:36.486323 | orchestrator | 2025-05-28 17:39:36 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-28 17:39:39.532075 | orchestrator | 2025-05-28 17:39:39 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-28 17:39:42.576376 | orchestrator | 2025-05-28 17:39:42 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-28 17:39:45.625053 | orchestrator | 2025-05-28 17:39:45 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-28 17:39:48.669056 | orchestrator | 2025-05-28 17:39:48 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-05-28 17:39:51.713976 | orchestrator |
2025-05-28 17:39:51.714209 | orchestrator | None
2025-05-28 17:39:51.959358 | orchestrator |
2025-05-28 17:39:51.964424 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Wed May 28 17:39:51 UTC 2025
2025-05-28 17:39:51.964471 | orchestrator |
2025-05-28 17:39:52.341673 | orchestrator | ok: Runtime: 0:34:21.219921
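Note: the STARTED/SUCCESS lines above are the OSISM client polling a background task roughly once per second until it leaves the STARTED state. As a rough shell sketch of the pattern (get_task_state is a hypothetical helper for illustration, not an OSISM command):

    # Hedged sketch of the polling loop seen above; get_task_state is hypothetical.
    task_id=69be0997-1372-4772-b3e3-a546d354caad
    while [ "$(get_task_state "$task_id")" = "STARTED" ]; do
        sleep 1
    done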
2025-05-28 17:39:52.606230 |
2025-05-28 17:39:52.606413 | TASK [Bootstrap services]
2025-05-28 17:39:53.396409 | orchestrator |
2025-05-28 17:39:53.396677 | orchestrator | # BOOTSTRAP
2025-05-28 17:39:53.396704 | orchestrator |
2025-05-28 17:39:53.396719 | orchestrator | + set -e
2025-05-28 17:39:53.396732 | orchestrator | + echo
2025-05-28 17:39:53.396746 | orchestrator | + echo '# BOOTSTRAP'
2025-05-28 17:39:53.396764 | orchestrator | + echo
2025-05-28 17:39:53.396814 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2025-05-28 17:39:53.406061 | orchestrator | + set -e
2025-05-28 17:39:53.406095 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2025-05-28 17:39:55.406854 | orchestrator | 2025-05-28 17:39:55 | INFO  | It takes a moment until task 0df13e13-6c41-49f2-8554-b45994b9c8f6 (flavor-manager) has been started and output is visible here.
2025-05-28 17:39:59.837841 | orchestrator | 2025-05-28 17:39:59 | INFO  | Flavor SCS-1V-4 created
2025-05-28 17:40:00.001704 | orchestrator | 2025-05-28 17:39:59 | INFO  | Flavor SCS-2V-8 created
2025-05-28 17:40:00.315540 | orchestrator | 2025-05-28 17:40:00 | INFO  | Flavor SCS-4V-16 created
2025-05-28 17:40:00.448599 | orchestrator | 2025-05-28 17:40:00 | INFO  | Flavor SCS-8V-32 created
2025-05-28 17:40:00.574315 | orchestrator | 2025-05-28 17:40:00 | INFO  | Flavor SCS-1V-2 created
2025-05-28 17:40:00.730363 | orchestrator | 2025-05-28 17:40:00 | INFO  | Flavor SCS-2V-4 created
2025-05-28 17:40:00.866701 | orchestrator | 2025-05-28 17:40:00 | INFO  | Flavor SCS-4V-8 created
2025-05-28 17:40:01.001706 | orchestrator | 2025-05-28 17:40:00 | INFO  | Flavor SCS-8V-16 created
2025-05-28 17:40:01.135667 | orchestrator | 2025-05-28 17:40:01 | INFO  | Flavor SCS-16V-32 created
2025-05-28 17:40:01.255591 | orchestrator | 2025-05-28 17:40:01 | INFO  | Flavor SCS-1V-8 created
2025-05-28 17:40:01.375615 | orchestrator | 2025-05-28 17:40:01 | INFO  | Flavor SCS-2V-16 created
2025-05-28 17:40:01.505250 | orchestrator | 2025-05-28 17:40:01 | INFO  | Flavor SCS-4V-32 created
2025-05-28 17:40:01.643045 | orchestrator | 2025-05-28 17:40:01 | INFO  | Flavor SCS-1L-1 created
2025-05-28 17:40:01.760303 | orchestrator | 2025-05-28 17:40:01 | INFO  | Flavor SCS-2V-4-20s created
2025-05-28 17:40:01.895372 | orchestrator | 2025-05-28 17:40:01 | INFO  | Flavor SCS-4V-16-100s created
2025-05-28 17:40:02.049235 | orchestrator | 2025-05-28 17:40:02 | INFO  | Flavor SCS-1V-4-10 created
2025-05-28 17:40:02.190783 | orchestrator | 2025-05-28 17:40:02 | INFO  | Flavor SCS-2V-8-20 created
2025-05-28 17:40:02.322448 | orchestrator | 2025-05-28 17:40:02 | INFO  | Flavor SCS-4V-16-50 created
2025-05-28 17:40:02.472255 | orchestrator | 2025-05-28 17:40:02 | INFO  | Flavor SCS-8V-32-100 created
2025-05-28 17:40:02.615820 | orchestrator | 2025-05-28 17:40:02 | INFO  | Flavor SCS-1V-2-5 created
2025-05-28 17:40:02.746909 | orchestrator | 2025-05-28 17:40:02 | INFO  | Flavor SCS-2V-4-10 created
2025-05-28 17:40:02.869060 | orchestrator | 2025-05-28 17:40:02 | INFO  | Flavor SCS-4V-8-20 created
2025-05-28 17:40:02.984497 | orchestrator | 2025-05-28 17:40:02 | INFO  | Flavor SCS-8V-16-50 created
2025-05-28 17:40:03.091885 | orchestrator | 2025-05-28 17:40:03 | INFO  | Flavor SCS-16V-32-100 created
2025-05-28 17:40:03.210923 | orchestrator | 2025-05-28 17:40:03 | INFO  | Flavor SCS-1V-8-20 created
2025-05-28 17:40:03.330818 | orchestrator | 2025-05-28 17:40:03 | INFO  | Flavor SCS-2V-16-50 created
2025-05-28 17:40:03.462105 | orchestrator | 2025-05-28 17:40:03 | INFO  | Flavor SCS-4V-32-100 created
2025-05-28 17:40:03.597655 | orchestrator | 2025-05-28 17:40:03 | INFO  | Flavor SCS-1L-1-5 created
2025-05-28 17:40:05.743173 | orchestrator | 2025-05-28 17:40:05 | INFO  | Trying to run play bootstrap-basic in environment openstack
2025-05-28 17:40:05.748840 | orchestrator | Registering Redlock._acquired_script
2025-05-28 17:40:05.748895 | orchestrator | Registering Redlock._extend_script
2025-05-28 17:40:05.748943 | orchestrator | Registering Redlock._release_script
2025-05-28 17:40:05.805259 | orchestrator | 2025-05-28 17:40:05 | INFO  | Task 53d31a92-0e3d-4b7c-b7ea-a0ba706fc80e (bootstrap-basic) was prepared for execution.
2025-05-28 17:40:05.805335 | orchestrator | 2025-05-28 17:40:05 | INFO  | It takes a moment until task 53d31a92-0e3d-4b7c-b7ea-a0ba706fc80e (bootstrap-basic) has been started and output is visible here.
2025-05-28 17:40:09.654951 | orchestrator |
2025-05-28 17:40:09.656729 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2025-05-28 17:40:09.656838 | orchestrator |
2025-05-28 17:40:09.659093 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-28 17:40:09.660477 | orchestrator | Wednesday 28 May 2025 17:40:09 +0000 (0:00:00.073) 0:00:00.073 *********
2025-05-28 17:40:11.431748 | orchestrator | ok: [localhost]
2025-05-28 17:40:11.432376 | orchestrator |
2025-05-28 17:40:11.433060 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2025-05-28 17:40:11.434183 | orchestrator | Wednesday 28 May 2025 17:40:11 +0000 (0:00:01.779) 0:00:01.852 *********
2025-05-28 17:40:20.656076 | orchestrator | ok: [localhost]
2025-05-28 17:40:20.657262 | orchestrator |
2025-05-28 17:40:20.658270 | orchestrator | TASK [Create volume type LUKS] *************************************************
2025-05-28 17:40:20.658634 | orchestrator | Wednesday 28 May 2025 17:40:20 +0000 (0:00:09.223) 0:00:11.075 *********
2025-05-28 17:40:27.933924 | orchestrator | changed: [localhost]
2025-05-28 17:40:27.934168 | orchestrator |
2025-05-28 17:40:27.934218 | orchestrator | TASK [Get volume type local] ***************************************************
2025-05-28 17:40:27.934533 | orchestrator | Wednesday 28 May 2025 17:40:27 +0000 (0:00:07.276) 0:00:18.352 *********
2025-05-28 17:40:34.431158 | orchestrator | ok: [localhost]
2025-05-28 17:40:34.431438 | orchestrator |
2025-05-28 17:40:34.432257 | orchestrator | TASK [Create volume type local] ************************************************
2025-05-28 17:40:34.434189 | orchestrator | Wednesday 28 May 2025 17:40:34 +0000 (0:00:06.497) 0:00:24.850 *********
2025-05-28 17:40:40.827577 | orchestrator | changed: [localhost]
2025-05-28 17:40:40.827858 | orchestrator |
2025-05-28 17:40:40.830297 | orchestrator | TASK [Create public network] ***************************************************
2025-05-28 17:40:40.831174 | orchestrator | Wednesday 28 May 2025 17:40:40 +0000 (0:00:06.396) 0:00:31.246 *********
2025-05-28 17:40:45.741268 | orchestrator | changed: [localhost]
2025-05-28 17:40:45.741811 | orchestrator |
2025-05-28 17:40:45.743029 | orchestrator | TASK [Set public network to default] *******************************************
2025-05-28 17:40:45.743894 | orchestrator | Wednesday 28 May 2025 17:40:45 +0000 (0:00:04.913) 0:00:36.159 *********
2025-05-28 17:40:51.691674 | orchestrator | changed: [localhost]
2025-05-28 17:40:51.691819 | orchestrator |
2025-05-28 17:40:51.692335 | orchestrator | TASK [Create public subnet] ****************************************************
2025-05-28 17:40:51.693093 | orchestrator | Wednesday 28 May 2025 17:40:51 +0000 (0:00:05.949) 0:00:42.109 *********
2025-05-28 17:40:55.886227 | orchestrator | changed: [localhost]
2025-05-28 17:40:55.886353 | orchestrator |
2025-05-28 17:40:55.886369 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2025-05-28 17:40:55.886382 | orchestrator | Wednesday 28 May 2025 17:40:55 +0000 (0:00:04.196) 0:00:46.306 *********
2025-05-28 17:40:59.607938 | orchestrator | changed: [localhost]
2025-05-28 17:40:59.608886 | orchestrator |
2025-05-28 17:40:59.608918 | orchestrator | TASK [Create manager role] *****************************************************
2025-05-28 17:40:59.609594 | orchestrator | Wednesday 28 May 2025 17:40:59 +0000 (0:00:03.719) 0:00:50.025 *********
2025-05-28 17:41:03.055703 | orchestrator | ok: [localhost]
2025-05-28 17:41:03.056605 | orchestrator |
2025-05-28 17:41:03.057399 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 17:41:03.057883 | orchestrator | 2025-05-28 17:41:03 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-28 17:41:03.058231 | orchestrator | 2025-05-28 17:41:03 | INFO  | Please wait and do not abort execution.
2025-05-28 17:41:03.060288 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 17:41:03.061524 | orchestrator |
2025-05-28 17:41:03.062546 | orchestrator |
2025-05-28 17:41:03.063124 | orchestrator | TASKS RECAP ********************************************************************
2025-05-28 17:41:03.063749 | orchestrator | Wednesday 28 May 2025 17:41:03 +0000 (0:00:03.448) 0:00:53.474 *********
2025-05-28 17:41:03.064239 | orchestrator | ===============================================================================
2025-05-28 17:41:03.064782 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.22s
2025-05-28 17:41:03.065718 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.28s
2025-05-28 17:41:03.066104 | orchestrator | Get volume type local --------------------------------------------------- 6.50s
2025-05-28 17:41:03.066455 | orchestrator | Create volume type local ------------------------------------------------ 6.40s
2025-05-28 17:41:03.067242 | orchestrator | Set public network to default ------------------------------------------- 5.95s
2025-05-28 17:41:03.067733 | orchestrator | Create public network --------------------------------------------------- 4.91s
2025-05-28 17:41:03.067962 | orchestrator | Create public subnet ---------------------------------------------------- 4.20s
2025-05-28 17:41:03.068431 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.72s
2025-05-28 17:41:03.068858 | orchestrator | Create manager role ----------------------------------------------------- 3.45s
2025-05-28 17:41:03.069446 | orchestrator | Gathering Facts --------------------------------------------------------- 1.78s
2025-05-28 17:41:05.370107 | orchestrator | 2025-05-28 17:41:05 | INFO  | It takes a moment until task 8be7cc23-68e7-4af0-a41a-1a208f4c91a4 (image-manager) has been started and output is visible here.
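Note: the bootstrap-basic play above corresponds roughly to the following OpenStack CLI calls. The resource names come from the task headings; the CIDRs and encryption details are assumptions for illustration, not values taken from the play source:

    # Hedged sketch of the bootstrap-basic steps; values marked ASSUMED are illustrative.
    openstack volume type create LUKS                 # encryption settings ASSUMED, not shown in the log
    openstack volume type create local
    openstack network create --external --default public
    openstack subnet create --network public --subnet-range 192.0.2.0/24 public-subnet    # range ASSUMED
    openstack subnet pool create --default --pool-prefix 10.0.0.0/8 default-ipv4          # prefix ASSUMED
    openstack role create manager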
2025-05-28 17:41:08.889113 | orchestrator | 2025-05-28 17:41:08 | INFO  | Processing image 'Cirros 0.6.2' 2025-05-28 17:41:09.099822 | orchestrator | 2025-05-28 17:41:09 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-05-28 17:41:09.100538 | orchestrator | 2025-05-28 17:41:09 | INFO  | Importing image Cirros 0.6.2 2025-05-28 17:41:09.101420 | orchestrator | 2025-05-28 17:41:09 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-05-28 17:41:10.867956 | orchestrator | 2025-05-28 17:41:10 | INFO  | Waiting for image to leave queued state... 2025-05-28 17:41:12.922292 | orchestrator | 2025-05-28 17:41:12 | INFO  | Waiting for import to complete... 2025-05-28 17:41:23.231262 | orchestrator | 2025-05-28 17:41:23 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-05-28 17:41:23.441069 | orchestrator | 2025-05-28 17:41:23 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-05-28 17:41:23.443306 | orchestrator | 2025-05-28 17:41:23 | INFO  | Setting internal_version = 0.6.2 2025-05-28 17:41:23.444663 | orchestrator | 2025-05-28 17:41:23 | INFO  | Setting image_original_user = cirros 2025-05-28 17:41:23.445767 | orchestrator | 2025-05-28 17:41:23 | INFO  | Adding tag os:cirros 2025-05-28 17:41:23.714720 | orchestrator | 2025-05-28 17:41:23 | INFO  | Setting property architecture: x86_64 2025-05-28 17:41:24.035831 | orchestrator | 2025-05-28 17:41:24 | INFO  | Setting property hw_disk_bus: scsi 2025-05-28 17:41:24.246972 | orchestrator | 2025-05-28 17:41:24 | INFO  | Setting property hw_rng_model: virtio 2025-05-28 17:41:24.457979 | orchestrator | 2025-05-28 17:41:24 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-05-28 17:41:24.686127 | orchestrator | 2025-05-28 17:41:24 | INFO  | Setting property hw_watchdog_action: reset 2025-05-28 17:41:24.894489 | orchestrator | 2025-05-28 17:41:24 | INFO  | Setting property hypervisor_type: qemu 2025-05-28 17:41:25.100347 | orchestrator | 2025-05-28 17:41:25 | INFO  | Setting property os_distro: cirros 2025-05-28 17:41:25.352238 | orchestrator | 2025-05-28 17:41:25 | INFO  | Setting property replace_frequency: never 2025-05-28 17:41:25.572752 | orchestrator | 2025-05-28 17:41:25 | INFO  | Setting property uuid_validity: none 2025-05-28 17:41:25.751763 | orchestrator | 2025-05-28 17:41:25 | INFO  | Setting property provided_until: none 2025-05-28 17:41:25.974887 | orchestrator | 2025-05-28 17:41:25 | INFO  | Setting property image_description: Cirros 2025-05-28 17:41:26.184768 | orchestrator | 2025-05-28 17:41:26 | INFO  | Setting property image_name: Cirros 2025-05-28 17:41:26.361541 | orchestrator | 2025-05-28 17:41:26 | INFO  | Setting property internal_version: 0.6.2 2025-05-28 17:41:26.546592 | orchestrator | 2025-05-28 17:41:26 | INFO  | Setting property image_original_user: cirros 2025-05-28 17:41:26.744416 | orchestrator | 2025-05-28 17:41:26 | INFO  | Setting property os_version: 0.6.2 2025-05-28 17:41:26.936669 | orchestrator | 2025-05-28 17:41:26 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-05-28 17:41:27.157874 | orchestrator | 2025-05-28 17:41:27 | INFO  | Setting property image_build_date: 2023-05-30 2025-05-28 17:41:27.401156 | orchestrator | 2025-05-28 17:41:27 | INFO  | Checking status of 'Cirros 0.6.2' 2025-05-28 17:41:27.402291 | orchestrator | 2025-05-28 17:41:27 | INFO 
 | Checking visibility of 'Cirros 0.6.2' 2025-05-28 17:41:27.402937 | orchestrator | 2025-05-28 17:41:27 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-05-28 17:41:27.597519 | orchestrator | 2025-05-28 17:41:27 | INFO  | Processing image 'Cirros 0.6.3' 2025-05-28 17:41:27.788724 | orchestrator | 2025-05-28 17:41:27 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-05-28 17:41:27.788817 | orchestrator | 2025-05-28 17:41:27 | INFO  | Importing image Cirros 0.6.3 2025-05-28 17:41:27.789427 | orchestrator | 2025-05-28 17:41:27 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-05-28 17:41:28.879396 | orchestrator | 2025-05-28 17:41:28 | INFO  | Waiting for image to leave queued state... 2025-05-28 17:41:30.929970 | orchestrator | 2025-05-28 17:41:30 | INFO  | Waiting for import to complete... 2025-05-28 17:41:41.062202 | orchestrator | 2025-05-28 17:41:41 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-05-28 17:41:41.298806 | orchestrator | 2025-05-28 17:41:41 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-05-28 17:41:41.299306 | orchestrator | 2025-05-28 17:41:41 | INFO  | Setting internal_version = 0.6.3 2025-05-28 17:41:41.300270 | orchestrator | 2025-05-28 17:41:41 | INFO  | Setting image_original_user = cirros 2025-05-28 17:41:41.301320 | orchestrator | 2025-05-28 17:41:41 | INFO  | Adding tag os:cirros 2025-05-28 17:41:41.509391 | orchestrator | 2025-05-28 17:41:41 | INFO  | Setting property architecture: x86_64 2025-05-28 17:41:41.692114 | orchestrator | 2025-05-28 17:41:41 | INFO  | Setting property hw_disk_bus: scsi 2025-05-28 17:41:41.903223 | orchestrator | 2025-05-28 17:41:41 | INFO  | Setting property hw_rng_model: virtio 2025-05-28 17:41:42.122320 | orchestrator | 2025-05-28 17:41:42 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-05-28 17:41:42.333979 | orchestrator | 2025-05-28 17:41:42 | INFO  | Setting property hw_watchdog_action: reset 2025-05-28 17:41:42.538412 | orchestrator | 2025-05-28 17:41:42 | INFO  | Setting property hypervisor_type: qemu 2025-05-28 17:41:42.734598 | orchestrator | 2025-05-28 17:41:42 | INFO  | Setting property os_distro: cirros 2025-05-28 17:41:42.957287 | orchestrator | 2025-05-28 17:41:42 | INFO  | Setting property replace_frequency: never 2025-05-28 17:41:43.149078 | orchestrator | 2025-05-28 17:41:43 | INFO  | Setting property uuid_validity: none 2025-05-28 17:41:43.321856 | orchestrator | 2025-05-28 17:41:43 | INFO  | Setting property provided_until: none 2025-05-28 17:41:43.529165 | orchestrator | 2025-05-28 17:41:43 | INFO  | Setting property image_description: Cirros 2025-05-28 17:41:43.948121 | orchestrator | 2025-05-28 17:41:43 | INFO  | Setting property image_name: Cirros 2025-05-28 17:41:44.163705 | orchestrator | 2025-05-28 17:41:44 | INFO  | Setting property internal_version: 0.6.3 2025-05-28 17:41:44.377843 | orchestrator | 2025-05-28 17:41:44 | INFO  | Setting property image_original_user: cirros 2025-05-28 17:41:44.592288 | orchestrator | 2025-05-28 17:41:44 | INFO  | Setting property os_version: 0.6.3 2025-05-28 17:41:44.769805 | orchestrator | 2025-05-28 17:41:44 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-05-28 17:41:44.989045 | orchestrator | 2025-05-28 17:41:44 | INFO  | Setting property image_build_date: 2024-09-26 2025-05-28 
17:41:45.178589 | orchestrator | 2025-05-28 17:41:45 | INFO  | Checking status of 'Cirros 0.6.3' 2025-05-28 17:41:45.179397 | orchestrator | 2025-05-28 17:41:45 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-05-28 17:41:45.180547 | orchestrator | 2025-05-28 17:41:45 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-05-28 17:41:46.136384 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-05-28 17:41:48.090333 | orchestrator | 2025-05-28 17:41:48 | INFO  | date: 2025-05-28 2025-05-28 17:41:48.090488 | orchestrator | 2025-05-28 17:41:48 | INFO  | image: octavia-amphora-haproxy-2024.2.20250528.qcow2 2025-05-28 17:41:48.090507 | orchestrator | 2025-05-28 17:41:48 | INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250528.qcow2 2025-05-28 17:41:48.092625 | orchestrator | 2025-05-28 17:41:48 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250528.qcow2.CHECKSUM 2025-05-28 17:41:48.138237 | orchestrator | 2025-05-28 17:41:48 | INFO  | checksum: f87aea1a9ed4c5e9a1e0fe83a3d719b20b5e2c46adfc5e877f1f142d2481fc9a 2025-05-28 17:41:48.219136 | orchestrator | 2025-05-28 17:41:48 | INFO  | It takes a moment until task 5aa31b74-87e2-4ce1-8d10-78f26717f67c (image-manager) has been started and output is visible here. 2025-05-28 17:41:48.435088 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-05-28 17:41:48.436433 | orchestrator | from pkg_resources import get_distribution, DistributionNotFound 2025-05-28 17:41:50.064171 | orchestrator | 2025-05-28 17:41:50 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-05-28' 2025-05-28 17:41:50.082506 | orchestrator | 2025-05-28 17:41:50 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250528.qcow2: 200 2025-05-28 17:41:50.083969 | orchestrator | 2025-05-28 17:41:50 | INFO  | Importing image OpenStack Octavia Amphora 2025-05-28 2025-05-28 17:41:50.085136 | orchestrator | 2025-05-28 17:41:50 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250528.qcow2 2025-05-28 17:41:51.217267 | orchestrator | 2025-05-28 17:41:51 | INFO  | Waiting for image to leave queued state... 2025-05-28 17:41:53.247979 | orchestrator | 2025-05-28 17:41:53 | INFO  | Waiting for import to complete... 2025-05-28 17:42:03.337559 | orchestrator | 2025-05-28 17:42:03 | INFO  | Waiting for import to complete... 2025-05-28 17:42:13.688632 | orchestrator | 2025-05-28 17:42:13 | INFO  | Waiting for import to complete... 2025-05-28 17:42:23.768775 | orchestrator | 2025-05-28 17:42:23 | INFO  | Waiting for import to complete... 2025-05-28 17:42:33.872972 | orchestrator | 2025-05-28 17:42:33 | INFO  | Waiting for import to complete... 
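Note: the 301-openstack-octavia-amhpora-image.sh step above resolves the amphora image URL and its published SHA256 before handing the import to image-manager. Performing the same comparison by hand, as a sketch (URLs copied from the log above):

    # Hedged sketch: compare the published digest with a locally computed one.
    URL=https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250528.qcow2
    curl -fsSL "${URL}.CHECKSUM"               # published checksum file
    curl -fsSL "$URL" | sha256sum              # digest of the image itself; the two must match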
2025-05-28 17:42:44.187755 | orchestrator | 2025-05-28 17:42:44 | INFO  | Import of 'OpenStack Octavia Amphora 2025-05-28' successfully completed, reloading images
2025-05-28 17:42:44.518245 | orchestrator | 2025-05-28 17:42:44 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-05-28'
2025-05-28 17:42:44.518817 | orchestrator | 2025-05-28 17:42:44 | INFO  | Setting internal_version = 2025-05-28
2025-05-28 17:42:44.519662 | orchestrator | 2025-05-28 17:42:44 | INFO  | Setting image_original_user = ubuntu
2025-05-28 17:42:44.520678 | orchestrator | 2025-05-28 17:42:44 | INFO  | Adding tag amphora
2025-05-28 17:42:44.870406 | orchestrator | 2025-05-28 17:42:44 | INFO  | Adding tag os:ubuntu
2025-05-28 17:42:45.112284 | orchestrator | 2025-05-28 17:42:45 | INFO  | Setting property architecture: x86_64
2025-05-28 17:42:45.309234 | orchestrator | 2025-05-28 17:42:45 | INFO  | Setting property hw_disk_bus: scsi
2025-05-28 17:42:45.504716 | orchestrator | 2025-05-28 17:42:45 | INFO  | Setting property hw_rng_model: virtio
2025-05-28 17:42:45.701337 | orchestrator | 2025-05-28 17:42:45 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-05-28 17:42:45.907705 | orchestrator | 2025-05-28 17:42:45 | INFO  | Setting property hw_watchdog_action: reset
2025-05-28 17:42:46.090492 | orchestrator | 2025-05-28 17:42:46 | INFO  | Setting property hypervisor_type: qemu
2025-05-28 17:42:46.262880 | orchestrator | 2025-05-28 17:42:46 | INFO  | Setting property os_distro: ubuntu
2025-05-28 17:42:46.434611 | orchestrator | 2025-05-28 17:42:46 | INFO  | Setting property replace_frequency: quarterly
2025-05-28 17:42:46.603542 | orchestrator | 2025-05-28 17:42:46 | INFO  | Setting property uuid_validity: last-1
2025-05-28 17:42:46.777942 | orchestrator | 2025-05-28 17:42:46 | INFO  | Setting property provided_until: none
2025-05-28 17:42:46.970990 | orchestrator | 2025-05-28 17:42:46 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2025-05-28 17:42:47.134904 | orchestrator | 2025-05-28 17:42:47 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2025-05-28 17:42:47.282180 | orchestrator | 2025-05-28 17:42:47 | INFO  | Setting property internal_version: 2025-05-28
2025-05-28 17:42:47.445442 | orchestrator | 2025-05-28 17:42:47 | INFO  | Setting property image_original_user: ubuntu
2025-05-28 17:42:47.633892 | orchestrator | 2025-05-28 17:42:47 | INFO  | Setting property os_version: 2025-05-28
2025-05-28 17:42:47.790787 | orchestrator | 2025-05-28 17:42:47 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250528.qcow2
2025-05-28 17:42:47.960087 | orchestrator | 2025-05-28 17:42:47 | INFO  | Setting property image_build_date: 2025-05-28
2025-05-28 17:42:48.140364 | orchestrator | 2025-05-28 17:42:48 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-05-28'
2025-05-28 17:42:48.140631 | orchestrator | 2025-05-28 17:42:48 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-05-28'
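Note: each "Setting property" / "Adding tag" line above is a Glance image update. Done by hand it would look roughly like the following (image names from the log, property values as shown above):

    # Hedged sketch of the property/tag updates image-manager performs.
    openstack image set --property hw_disk_bus=scsi --property hw_rng_model=virtio "OpenStack Octavia Amphora 2025-05-28"
    openstack image set --tag amphora "OpenStack Octavia Amphora 2025-05-28"
    openstack image set --public "Cirros 0.6.3"    # the visibility step shown earlier for the Cirros images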
2025-05-28 17:42:48.300139 | orchestrator | 2025-05-28 17:42:48 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2025-05-28 17:42:48.300701 | orchestrator | 2025-05-28 17:42:48 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2025-05-28 17:42:48.301293 | orchestrator | 2025-05-28 17:42:48 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2025-05-28 17:42:48.302284 | orchestrator | 2025-05-28 17:42:48 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2025-05-28 17:42:48.818722 | orchestrator | ok: Runtime: 0:02:55.729539
2025-05-28 17:42:48.846857 |
2025-05-28 17:42:48.847013 | TASK [Run checks]
2025-05-28 17:42:49.672928 | orchestrator | + set -e
2025-05-28 17:42:49.673189 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-05-28 17:42:49.673222 | orchestrator | ++ export INTERACTIVE=false
2025-05-28 17:42:49.673244 | orchestrator | ++ INTERACTIVE=false
2025-05-28 17:42:49.673258 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-05-28 17:42:49.673270 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-05-28 17:42:49.673285 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-05-28 17:42:49.673875 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-05-28 17:42:49.680288 | orchestrator |
2025-05-28 17:42:49.680360 | orchestrator | # CHECK
2025-05-28 17:42:49.680371 | orchestrator |
2025-05-28 17:42:49.680380 | orchestrator | ++ export MANAGER_VERSION=latest
2025-05-28 17:42:49.680393 | orchestrator | ++ MANAGER_VERSION=latest
2025-05-28 17:42:49.680402 | orchestrator | + echo
2025-05-28 17:42:49.680410 | orchestrator | + echo '# CHECK'
2025-05-28 17:42:49.680418 | orchestrator | + echo
2025-05-28 17:42:49.680431 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-05-28 17:42:49.681377 | orchestrator | ++ semver latest 5.0.0
2025-05-28 17:42:49.746120 | orchestrator |
2025-05-28 17:42:49.746241 | orchestrator | ## Containers @ testbed-manager
2025-05-28 17:42:49.746262 | orchestrator |
2025-05-28 17:42:49.746285 | orchestrator | + [[ -1 -eq -1 ]]
2025-05-28 17:42:49.746304 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-05-28 17:42:49.746325 | orchestrator | + echo
2025-05-28 17:42:49.746346 | orchestrator | + echo '## Containers @ testbed-manager'
2025-05-28 17:42:49.746366 | orchestrator | + echo
2025-05-28 17:42:49.746384 | orchestrator | + osism container testbed-manager ps
2025-05-28 17:42:51.846543 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-05-28 17:42:51.846723 | orchestrator | 2a637c4e0c75 registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_blackbox_exporter
2025-05-28 17:42:51.846749 | orchestrator | 3daae859c138 registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_alertmanager
2025-05-28 17:42:51.846762 | orchestrator | 3f98c6ed75ee registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 13 minutes prometheus_cadvisor
2025-05-28 17:42:51.846780 | orchestrator | 29805f81f2e9 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2025-05-28 17:42:51.846792 | orchestrator | 2eb85dbdb606 registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_server
2025-05-28 17:42:51.846810 | orchestrator | 87b1361b237a registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 17 minutes ago Up 16 minutes cephclient
2025-05-28 17:42:51.846822 | orchestrator | 031d5d954d71 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron
2025-05-28 17:42:51.846834 | orchestrator |
9f26b9659311 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2025-05-28 17:42:51.846845 | orchestrator | ca9f959517fd registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2025-05-28 17:42:51.846883 | orchestrator | 3642a7f17b64 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 29 minutes ago Up 29 minutes (healthy) 80/tcp phpmyadmin 2025-05-28 17:42:51.846895 | orchestrator | 2ce3317adb79 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 31 minutes ago Up 30 minutes openstackclient 2025-05-28 17:42:51.846906 | orchestrator | e73d53cbc512 registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 31 minutes ago Up 30 minutes (healthy) 8080/tcp homer 2025-05-28 17:42:51.846917 | orchestrator | f9ef23dd3cbf registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 37 minutes ago Up 37 minutes (healthy) manager-inventory_reconciler-1 2025-05-28 17:42:51.846929 | orchestrator | 4b1dc40606ad registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 51 minutes ago Up 50 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2025-05-28 17:42:51.846939 | orchestrator | 1492c32f6ffe registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 54 minutes ago Up 54 minutes (healthy) osism-ansible 2025-05-28 17:42:51.846972 | orchestrator | c5712d0602c7 registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 54 minutes ago Up 54 minutes (healthy) ceph-ansible 2025-05-28 17:42:51.846990 | orchestrator | 618a8542bfc4 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 54 minutes ago Up 54 minutes (healthy) osism-kubernetes 2025-05-28 17:42:51.847001 | orchestrator | 368334460db9 registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 54 minutes ago Up 54 minutes (healthy) kolla-ansible 2025-05-28 17:42:51.847012 | orchestrator | 58a7219082b9 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 54 minutes ago Up 54 minutes (healthy) 8000/tcp manager-ara-server-1 2025-05-28 17:42:51.847023 | orchestrator | 0dd1238111cb registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 54 minutes ago Up 54 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2025-05-28 17:42:51.847034 | orchestrator | 60dd8fe98039 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 54 minutes ago Up 54 minutes (healthy) manager-beat-1 2025-05-28 17:42:51.847045 | orchestrator | 6add43a1e957 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 54 minutes ago Up 54 minutes (healthy) manager-netbox-1 2025-05-28 17:42:51.847056 | orchestrator | 31ba877d84c4 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 54 minutes ago Up 54 minutes (healthy) manager-flower-1 2025-05-28 17:42:51.847075 | orchestrator | a864df57f661 registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 54 minutes ago Up 54 minutes (healthy) osismclient 2025-05-28 17:42:51.847086 | orchestrator | cd08cb0a7f8d registry.osism.tech/dockerhub/library/redis:7.4.3-alpine "docker-entrypoint.s…" 54 minutes ago Up 54 minutes (healthy) 6379/tcp manager-redis-1 2025-05-28 17:42:51.847097 | orchestrator | af6b12be0d8a registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" 54 minutes ago Up 54 minutes (healthy) 3306/tcp manager-mariadb-1 2025-05-28 17:42:51.847108 | orchestrator | adb61850e812 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 54 minutes ago Up 
54 minutes (healthy) manager-openstack-1 2025-05-28 17:42:51.847119 | orchestrator | 9d300749cdb3 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 54 minutes ago Up 54 minutes (healthy) manager-watchdog-1 2025-05-28 17:42:51.847130 | orchestrator | b8fa2ce67e75 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 54 minutes ago Up 54 minutes (healthy) manager-listener-1 2025-05-28 17:42:51.847141 | orchestrator | 0668cc67e38d registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 54 minutes ago Up 54 minutes (healthy) manager-conductor-1 2025-05-28 17:42:51.847157 | orchestrator | 7d514a92bcb4 registry.osism.tech/osism/netbox:v4.2.2 "/opt/netbox/venv/bi…" About an hour ago Up 56 minutes (healthy) netbox-netbox-worker-1 2025-05-28 17:42:51.847180 | orchestrator | 5fa19ad167a3 registry.osism.tech/osism/netbox:v4.2.2 "/usr/bin/tini -- /o…" About an hour ago Up About an hour (healthy) netbox-netbox-1 2025-05-28 17:42:51.847192 | orchestrator | 3f47e52d7f06 registry.osism.tech/dockerhub/library/postgres:16.9-alpine "docker-entrypoint.s…" About an hour ago Up About an hour (healthy) 5432/tcp netbox-postgres-1 2025-05-28 17:42:51.847203 | orchestrator | 8922247fd49e registry.osism.tech/dockerhub/library/redis:7.4.3-alpine "docker-entrypoint.s…" About an hour ago Up About an hour (healthy) 6379/tcp netbox-redis-1 2025-05-28 17:42:51.847214 | orchestrator | 1082d9599dce registry.osism.tech/dockerhub/library/traefik:v3.4.0 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2025-05-28 17:42:52.079929 | orchestrator | 2025-05-28 17:42:52.080061 | orchestrator | ## Images @ testbed-manager 2025-05-28 17:42:52.080078 | orchestrator | 2025-05-28 17:42:52.080091 | orchestrator | + echo 2025-05-28 17:42:52.080102 | orchestrator | + echo '## Images @ testbed-manager' 2025-05-28 17:42:52.080115 | orchestrator | + echo 2025-05-28 17:42:52.080129 | orchestrator | + osism container testbed-manager images 2025-05-28 17:42:54.032823 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-05-28 17:42:54.032924 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 5e2e0827ec6c 48 minutes ago 308MB 2025-05-28 17:42:54.032940 | orchestrator | registry.osism.tech/osism/osism latest bc90da6792a4 About an hour ago 297MB 2025-05-28 17:42:54.032977 | orchestrator | registry.osism.tech/osism/inventory-reconciler d13f104e02d4 5 hours ago 308MB 2025-05-28 17:42:54.032989 | orchestrator | registry.osism.tech/osism/osism-ansible latest 571ff05b0796 5 hours ago 577MB 2025-05-28 17:42:54.032999 | orchestrator | registry.osism.tech/osism/homer v25.05.2 05829bdea345 14 hours ago 11MB 2025-05-28 17:42:54.033010 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 c17ed8749fc2 14 hours ago 225MB 2025-05-28 17:42:54.033021 | orchestrator | registry.osism.tech/osism/cephclient reef 81792de3b83b 14 hours ago 453MB 2025-05-28 17:42:54.033032 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 8def5b434a82 16 hours ago 628MB 2025-05-28 17:42:54.033042 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 fb20b95d5799 16 hours ago 746MB 2025-05-28 17:42:54.033054 | orchestrator | registry.osism.tech/kolla/cron 2024.2 b5a3b35ecfe2 16 hours ago 318MB 2025-05-28 17:42:54.033065 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 1a459c88c97a 16 hours ago 410MB 2025-05-28 17:42:54.033076 | orchestrator | 
registry.osism.tech/kolla/prometheus-v2-server 2024.2 119b2d4cecb4 16 hours ago 891MB 2025-05-28 17:42:54.033087 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 12dd06a3f982 16 hours ago 358MB 2025-05-28 17:42:54.033097 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 3466736047cf 16 hours ago 456MB 2025-05-28 17:42:54.033108 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 a9e2a6bb489c 16 hours ago 360MB 2025-05-28 17:42:54.033118 | orchestrator | registry.osism.tech/osism/ceph-ansible reef b49c5c255f89 18 hours ago 538MB 2025-05-28 17:42:54.033129 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 7fa40533f535 18 hours ago 574MB 2025-05-28 17:42:54.033140 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 123254cc0800 18 hours ago 1.2GB 2025-05-28 17:42:54.033150 | orchestrator | registry.osism.tech/dockerhub/library/postgres 16.9-alpine b56133b65cd3 2 weeks ago 275MB 2025-05-28 17:42:54.033161 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.0 79e66182ffbe 3 weeks ago 224MB 2025-05-28 17:42:54.033172 | orchestrator | registry.osism.tech/dockerhub/hashicorp/vault 1.19.3 272792d172e0 4 weeks ago 504MB 2025-05-28 17:42:54.033183 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.3-alpine 9a07b03a1871 4 weeks ago 41.4MB 2025-05-28 17:42:54.033209 | orchestrator | registry.osism.tech/osism/netbox v4.2.2 de0f89b61971 8 weeks ago 817MB 2025-05-28 17:42:54.033220 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.7.2 4815a3e162ea 3 months ago 328MB 2025-05-28 17:42:54.033231 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 4 months ago 571MB 2025-05-28 17:42:54.033242 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 8 months ago 300MB 2025-05-28 17:42:54.033252 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 11 months ago 146MB 2025-05-28 17:42:54.291372 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-05-28 17:42:54.292035 | orchestrator | ++ semver latest 5.0.0 2025-05-28 17:42:54.339060 | orchestrator | 2025-05-28 17:42:54.339147 | orchestrator | ## Containers @ testbed-node-0 2025-05-28 17:42:54.339161 | orchestrator | 2025-05-28 17:42:54.339173 | orchestrator | + [[ -1 -eq -1 ]] 2025-05-28 17:42:54.339183 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-28 17:42:54.339194 | orchestrator | + echo 2025-05-28 17:42:54.339235 | orchestrator | + echo '## Containers @ testbed-node-0' 2025-05-28 17:42:54.339247 | orchestrator | + echo 2025-05-28 17:42:54.339258 | orchestrator | + osism container testbed-node-0 ps 2025-05-28 17:42:56.504801 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-05-28 17:42:56.504922 | orchestrator | 3236ebb62469 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-05-28 17:42:56.504939 | orchestrator | 27d8d1938e91 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-05-28 17:42:56.504951 | orchestrator | 49dce4484d50 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-05-28 17:42:56.504964 | orchestrator | 92f54f31cb4c registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 
minutes octavia_driver_agent 2025-05-28 17:42:56.504975 | orchestrator | 9c41e90e7672 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-05-28 17:42:56.504986 | orchestrator | 6171c0846f97 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor 2025-05-28 17:42:56.504997 | orchestrator | c3b150babaf7 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2025-05-28 17:42:56.505007 | orchestrator | 224479286f4e registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-05-28 17:42:56.505018 | orchestrator | e5364725ec89 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-05-28 17:42:56.505029 | orchestrator | 9dc006c5bb75 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker 2025-05-28 17:42:56.505039 | orchestrator | f3e14523b891 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns 2025-05-28 17:42:56.505050 | orchestrator | ab5d2979cfbc registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy 2025-05-28 17:42:56.505060 | orchestrator | e1c739032637 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2025-05-28 17:42:56.505071 | orchestrator | d2ad02159764 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2025-05-28 17:42:56.505098 | orchestrator | 6d5bf74516ea registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor 2025-05-28 17:42:56.505110 | orchestrator | 85c60ec7ff5a registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2025-05-28 17:42:56.505121 | orchestrator | 34af34b48a01 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2025-05-28 17:42:56.505136 | orchestrator | 5e57bcfaeba2 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2025-05-28 17:42:56.505166 | orchestrator | 2fa19c06b67d registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2025-05-28 17:42:56.505177 | orchestrator | ad56b8f70d0d registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2025-05-28 17:42:56.505188 | orchestrator | 817c7159e9ee registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2025-05-28 17:42:56.505218 | orchestrator | f03243e613fa registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api 2025-05-28 17:42:56.505230 | orchestrator | 9454827735d5 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-05-28 17:42:56.505241 | orchestrator | ff0aab94dca1 
registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2025-05-28 17:42:56.505252 | orchestrator | b1749db52a12 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2025-05-28 17:42:56.505263 | orchestrator | 2f75f7b7f692 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api 2025-05-28 17:42:56.505274 | orchestrator | 762e091676a5 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2025-05-28 17:42:56.505285 | orchestrator | a4b55662ec5d registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler 2025-05-28 17:42:56.505295 | orchestrator | 28dfbe6cf2e6 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2025-05-28 17:42:56.505306 | orchestrator | f17fa39ca745 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-05-28 17:42:56.505317 | orchestrator | e330afb1661f registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2025-05-28 17:42:56.505327 | orchestrator | 637f58677deb registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-0 2025-05-28 17:42:56.505338 | orchestrator | c1686aeb36e9 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone 2025-05-28 17:42:56.505349 | orchestrator | c1d56a982d69 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet 2025-05-28 17:42:56.505360 | orchestrator | 3c30f7684421 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2025-05-28 17:42:56.505370 | orchestrator | b50f314e5777 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2025-05-28 17:42:56.505381 | orchestrator | bd1d69c49f21 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb 2025-05-28 17:42:56.505392 | orchestrator | 30af84885a8a registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2025-05-28 17:42:56.505410 | orchestrator | b62180f7c255 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-05-28 17:42:56.505426 | orchestrator | 4d20db894807 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-0 2025-05-28 17:42:56.505437 | orchestrator | 05b4806c8c04 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2025-05-28 17:42:56.505448 | orchestrator | 571a2be7da1e registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql 2025-05-28 17:42:56.505458 | orchestrator | ec7d6bcbd819 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) 
haproxy 2025-05-28 17:42:56.505469 | orchestrator | 02746e03e2c3 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_northd 2025-05-28 17:42:56.505486 | orchestrator | a7ef04d3d32c registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_sb_db 2025-05-28 17:42:56.505498 | orchestrator | 66f51638bc8a registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_nb_db 2025-05-28 17:42:56.505509 | orchestrator | d1d083295784 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_controller 2025-05-28 17:42:56.505519 | orchestrator | a943adc42ebb registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-0 2025-05-28 17:42:56.505530 | orchestrator | 10b2d5a4a88b registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq 2025-05-28 17:42:56.505540 | orchestrator | de2ee650b499 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd 2025-05-28 17:42:56.505551 | orchestrator | 9416cc9f7276 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel 2025-05-28 17:42:56.505561 | orchestrator | c0d0035012d8 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db 2025-05-28 17:42:56.505572 | orchestrator | af988e874153 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis 2025-05-28 17:42:56.505582 | orchestrator | e04b2b9e90e4 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached 2025-05-28 17:42:56.505625 | orchestrator | a5cebe8257f8 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2025-05-28 17:42:56.505637 | orchestrator | a8e41751bf7b registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2025-05-28 17:42:56.505648 | orchestrator | 68de7b930b52 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd 2025-05-28 17:42:56.745525 | orchestrator | 2025-05-28 17:42:56.745667 | orchestrator | ## Images @ testbed-node-0 2025-05-28 17:42:56.745678 | orchestrator | 2025-05-28 17:42:56.745683 | orchestrator | + echo 2025-05-28 17:42:56.745689 | orchestrator | + echo '## Images @ testbed-node-0' 2025-05-28 17:42:56.745695 | orchestrator | + echo 2025-05-28 17:42:56.745700 | orchestrator | + osism container testbed-node-0 images 2025-05-28 17:42:58.823843 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-05-28 17:42:58.823973 | orchestrator | registry.osism.tech/osism/ceph-daemon reef d68731bce62a 14 hours ago 1.27GB 2025-05-28 17:42:58.823990 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 a84b55c0d7e2 16 hours ago 375MB 2025-05-28 17:42:58.824002 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 609b84a264a7 16 hours ago 1.59GB 2025-05-28 17:42:58.824013 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 400f3edd5387 16 hours ago 1.55GB 2025-05-28 17:42:58.824024 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 639cfade0c7e 16 hours ago 326MB 2025-05-28 17:42:58.824034 | 
orchestrator | registry.osism.tech/kolla/fluentd 2024.2 8def5b434a82 16 hours ago 628MB 2025-05-28 17:42:58.824045 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 c153d6307ebd 16 hours ago 318MB 2025-05-28 17:42:58.824055 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 fb20b95d5799 16 hours ago 746MB 2025-05-28 17:42:58.824066 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 52bf6df12913 16 hours ago 1.01GB 2025-05-28 17:42:58.824077 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 fa9f3ff7f637 16 hours ago 329MB 2025-05-28 17:42:58.824087 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 ee146a727a54 16 hours ago 417MB 2025-05-28 17:42:58.824098 | orchestrator | registry.osism.tech/kolla/cron 2024.2 b5a3b35ecfe2 16 hours ago 318MB 2025-05-28 17:42:58.824109 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 1b1b3245cdc9 16 hours ago 351MB 2025-05-28 17:42:58.824119 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 1a459c88c97a 16 hours ago 410MB 2025-05-28 17:42:58.824130 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 12dd06a3f982 16 hours ago 358MB 2025-05-28 17:42:58.824140 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 3bcaa1b157c1 16 hours ago 353MB 2025-05-28 17:42:58.824151 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 e22dcb22e98b 16 hours ago 344MB 2025-05-28 17:42:58.824162 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 932290e1c405 16 hours ago 361MB 2025-05-28 17:42:58.824172 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 e84962bbdc78 16 hours ago 361MB 2025-05-28 17:42:58.824183 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 db1e22cb5a65 16 hours ago 590MB 2025-05-28 17:42:58.824194 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 9292008bd508 16 hours ago 324MB 2025-05-28 17:42:58.824209 | orchestrator | registry.osism.tech/kolla/redis 2024.2 bdd8a8d80398 16 hours ago 324MB 2025-05-28 17:42:58.824219 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 086f0a82b9cd 16 hours ago 1.21GB 2025-05-28 17:42:58.824230 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 ce955ab1e21d 16 hours ago 947MB 2025-05-28 17:42:58.824241 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 1e09d922d303 16 hours ago 946MB 2025-05-28 17:42:58.824252 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 8081a327ea05 16 hours ago 946MB 2025-05-28 17:42:58.824262 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 557562e5a1f1 16 hours ago 947MB 2025-05-28 17:42:58.824300 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 4e1fa6e3e8ec 16 hours ago 1.06GB 2025-05-28 17:42:58.824311 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 e3ba3bc91014 16 hours ago 1.06GB 2025-05-28 17:42:58.824322 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 efbaa23cd1be 16 hours ago 1.06GB 2025-05-28 17:42:58.824333 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 5e68c08b1b45 16 hours ago 1.04GB 2025-05-28 17:42:58.824343 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 0535ffcf0855 16 hours ago 1.04GB 2025-05-28 17:42:58.824354 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 2e59a6427b26 16 hours ago 1.1GB 2025-05-28 17:42:58.824364 | orchestrator | 
registry.osism.tech/kolla/octavia-housekeeping 2024.2 255ac3b67147 16 hours ago 1.1GB 2025-05-28 17:42:58.824375 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 0a49bc6995a1 16 hours ago 1.1GB 2025-05-28 17:42:58.824385 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 39ea15b02c36 16 hours ago 1.12GB 2025-05-28 17:42:58.824425 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 ec8dabe7ff26 16 hours ago 1.12GB 2025-05-28 17:42:58.824439 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 009c2d17fe06 16 hours ago 1.05GB 2025-05-28 17:42:58.824451 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 f27a2d5666c7 16 hours ago 1.05GB 2025-05-28 17:42:58.824463 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 424266eb2999 16 hours ago 1.05GB 2025-05-28 17:42:58.824475 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 a6e8b2b20454 16 hours ago 1.06GB 2025-05-28 17:42:58.824488 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 ab81265c7f1b 16 hours ago 1.06GB 2025-05-28 17:42:58.824500 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 b78748c34777 16 hours ago 1.05GB 2025-05-28 17:42:58.824512 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 df7de5c04030 16 hours ago 1.42GB 2025-05-28 17:42:58.824524 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 9698dffc368f 16 hours ago 1.29GB 2025-05-28 17:42:58.824536 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 c8e51d387157 16 hours ago 1.29GB 2025-05-28 17:42:58.824548 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 946668927342 16 hours ago 1.29GB 2025-05-28 17:42:58.824560 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 cad5e686b4d0 16 hours ago 1.41GB 2025-05-28 17:42:58.824572 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 a5befc91d7f8 16 hours ago 1.41GB 2025-05-28 17:42:58.824584 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 37717431f416 16 hours ago 1.11GB 2025-05-28 17:42:58.824621 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 96f2e4852e98 16 hours ago 1.13GB 2025-05-28 17:42:58.824635 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 6c8ae8d6c78c 16 hours ago 1.11GB 2025-05-28 17:42:58.824659 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 94442865f41b 16 hours ago 1.2GB 2025-05-28 17:42:58.824672 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 c0a9896b172c 16 hours ago 1.31GB 2025-05-28 17:42:58.824684 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 4ffa94b491be 16 hours ago 1.15GB 2025-05-28 17:42:58.824697 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 720ffe3f039d 16 hours ago 1.11GB 2025-05-28 17:42:58.824709 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 cd54b415e399 16 hours ago 1.11GB 2025-05-28 17:42:58.824729 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 eef743fc2e82 16 hours ago 1.24GB 2025-05-28 17:42:58.824742 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 26f9b2842ab8 16 hours ago 1.04GB 2025-05-28 17:42:58.824754 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 e12a148e3907 16 hours ago 1.04GB 2025-05-28 17:42:58.824765 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 5bc82cab8e50 16 hours ago 1.04GB 2025-05-28 17:42:58.824776 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 
901e8871dcfe 16 hours ago 1.04GB 2025-05-28 17:42:58.824787 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 21685e8b3a1d 16 hours ago 1.04GB 2025-05-28 17:42:59.067527 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-05-28 17:42:59.067883 | orchestrator | ++ semver latest 5.0.0 2025-05-28 17:42:59.120057 | orchestrator | 2025-05-28 17:42:59.120160 | orchestrator | ## Containers @ testbed-node-1 2025-05-28 17:42:59.120173 | orchestrator | 2025-05-28 17:42:59.120183 | orchestrator | + [[ -1 -eq -1 ]] 2025-05-28 17:42:59.120193 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-28 17:42:59.120203 | orchestrator | + echo 2025-05-28 17:42:59.120213 | orchestrator | + echo '## Containers @ testbed-node-1' 2025-05-28 17:42:59.120224 | orchestrator | + echo 2025-05-28 17:42:59.120234 | orchestrator | + osism container testbed-node-1 ps 2025-05-28 17:43:01.285185 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-05-28 17:43:01.285320 | orchestrator | f1d5d3b518a4 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-05-28 17:43:01.285338 | orchestrator | 8f0d90cf8b5f registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-05-28 17:43:01.285350 | orchestrator | 3173e7424e1b registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-05-28 17:43:01.285362 | orchestrator | 11410cb86eb1 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2025-05-28 17:43:01.285379 | orchestrator | 591c8fadd931 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-05-28 17:43:01.285391 | orchestrator | 6edd25b3e5ba registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-05-28 17:43:01.285402 | orchestrator | 026ab3313ddf registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor 2025-05-28 17:43:01.285412 | orchestrator | 09ffa0c11a50 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2025-05-28 17:43:01.285423 | orchestrator | 557637ec4932 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-05-28 17:43:01.285435 | orchestrator | 165247d6931f registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker 2025-05-28 17:43:01.285446 | orchestrator | 065bf537e7cc registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy 2025-05-28 17:43:01.285456 | orchestrator | a7c17ede7349 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns 2025-05-28 17:43:01.285519 | orchestrator | 93504e7b76a6 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2025-05-28 17:43:01.285531 | orchestrator | 8214211df711 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 
2025-05-28 17:43:01.285542 | orchestrator | fe778e3a9ed2 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor 2025-05-28 17:43:01.285553 | orchestrator | a3f61b8c3707 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2025-05-28 17:43:01.285564 | orchestrator | 8a3a0d93271c registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2025-05-28 17:43:01.285575 | orchestrator | 92f0701ebb5f registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2025-05-28 17:43:01.285585 | orchestrator | 1b602bbf1eb7 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2025-05-28 17:43:01.285596 | orchestrator | fb57b673d396 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2025-05-28 17:43:01.285675 | orchestrator | 33b557a75bfa registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2025-05-28 17:43:01.285710 | orchestrator | 00af8d43ab9d registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api 2025-05-28 17:43:01.285725 | orchestrator | 6fe42f76a3d7 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-05-28 17:43:01.285738 | orchestrator | 691d5535bda0 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2025-05-28 17:43:01.285752 | orchestrator | 09ba28ad485c registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api 2025-05-28 17:43:01.285765 | orchestrator | 16fe7724ad3e registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2025-05-28 17:43:01.285778 | orchestrator | 195184db1849 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2025-05-28 17:43:01.285790 | orchestrator | 65aa265d1ecd registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler 2025-05-28 17:43:01.285802 | orchestrator | 837089d26706 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2025-05-28 17:43:01.285815 | orchestrator | b45d3a2a07f4 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2025-05-28 17:43:01.285828 | orchestrator | 73e8ded3e6cf registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-05-28 17:43:01.285849 | orchestrator | 202b9c653a36 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-1 2025-05-28 17:43:01.285862 | orchestrator | 918f6f72ac92 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone 2025-05-28 
17:43:01.285875 | orchestrator | 6c04a10265ba registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon 2025-05-28 17:43:01.285887 | orchestrator | c07fcc6be611 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet 2025-05-28 17:43:01.285898 | orchestrator | 87479dd5b8ec registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh 2025-05-28 17:43:01.285920 | orchestrator | 75ec7e08f1ab registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards 2025-05-28 17:43:01.285932 | orchestrator | 1e29d940108d registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2025-05-28 17:43:01.285942 | orchestrator | cd79fada8ef8 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch 2025-05-28 17:43:01.285953 | orchestrator | c94cd1b15c86 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-1 2025-05-28 17:43:01.285964 | orchestrator | 5e0bff29e3df registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2025-05-28 17:43:01.285975 | orchestrator | 25a67598f800 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql 2025-05-28 17:43:01.285986 | orchestrator | a652d6263fbf registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy 2025-05-28 17:43:01.285997 | orchestrator | 565ea236e313 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_northd 2025-05-28 17:43:01.286014 | orchestrator | 9a68abc098b3 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_sb_db 2025-05-28 17:43:01.286098 | orchestrator | 8d604ae4b6df registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_nb_db 2025-05-28 17:43:01.286110 | orchestrator | 5e8a32286032 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller 2025-05-28 17:43:01.286121 | orchestrator | 8281f2c8819f registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq 2025-05-28 17:43:01.286132 | orchestrator | 2f45b7c1f94b registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-1 2025-05-28 17:43:01.286142 | orchestrator | a2d2396cfff2 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd 2025-05-28 17:43:01.286153 | orchestrator | 2f9a8f6a302a registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db 2025-05-28 17:43:01.286172 | orchestrator | 43943ca30360 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel 2025-05-28 17:43:01.286183 | orchestrator | 45988308a18d registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis 2025-05-28 17:43:01.286194 | orchestrator | f20424b8dd65 
registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached 2025-05-28 17:43:01.286205 | orchestrator | cb853968393c registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2025-05-28 17:43:01.286215 | orchestrator | cb6dcbf3134d registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2025-05-28 17:43:01.286226 | orchestrator | 8c2ea26e7111 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd 2025-05-28 17:43:01.536015 | orchestrator | 2025-05-28 17:43:01.536954 | orchestrator | ## Images @ testbed-node-1 2025-05-28 17:43:01.536989 | orchestrator | 2025-05-28 17:43:01.537002 | orchestrator | + echo 2025-05-28 17:43:01.537014 | orchestrator | + echo '## Images @ testbed-node-1' 2025-05-28 17:43:01.537027 | orchestrator | + echo 2025-05-28 17:43:01.537038 | orchestrator | + osism container testbed-node-1 images 2025-05-28 17:43:03.567457 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-05-28 17:43:03.567584 | orchestrator | registry.osism.tech/osism/ceph-daemon reef d68731bce62a 14 hours ago 1.27GB 2025-05-28 17:43:03.567603 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 a84b55c0d7e2 16 hours ago 375MB 2025-05-28 17:43:03.567650 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 609b84a264a7 16 hours ago 1.59GB 2025-05-28 17:43:03.567662 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 400f3edd5387 16 hours ago 1.55GB 2025-05-28 17:43:03.567673 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 639cfade0c7e 16 hours ago 326MB 2025-05-28 17:43:03.567684 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 8def5b434a82 16 hours ago 628MB 2025-05-28 17:43:03.567695 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 c153d6307ebd 16 hours ago 318MB 2025-05-28 17:43:03.567706 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 fb20b95d5799 16 hours ago 746MB 2025-05-28 17:43:03.567716 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 52bf6df12913 16 hours ago 1.01GB 2025-05-28 17:43:03.567746 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 fa9f3ff7f637 16 hours ago 329MB 2025-05-28 17:43:03.567758 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 ee146a727a54 16 hours ago 417MB 2025-05-28 17:43:03.567769 | orchestrator | registry.osism.tech/kolla/cron 2024.2 b5a3b35ecfe2 16 hours ago 318MB 2025-05-28 17:43:03.567779 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 1b1b3245cdc9 16 hours ago 351MB 2025-05-28 17:43:03.567790 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 1a459c88c97a 16 hours ago 410MB 2025-05-28 17:43:03.567801 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 12dd06a3f982 16 hours ago 358MB 2025-05-28 17:43:03.567812 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 3bcaa1b157c1 16 hours ago 353MB 2025-05-28 17:43:03.567823 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 e22dcb22e98b 16 hours ago 344MB 2025-05-28 17:43:03.567833 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 932290e1c405 16 hours ago 361MB 2025-05-28 17:43:03.567865 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 e84962bbdc78 16 hours ago 361MB 2025-05-28 17:43:03.567877 | orchestrator | 
registry.osism.tech/kolla/mariadb-server 2024.2 db1e22cb5a65 16 hours ago 590MB 2025-05-28 17:43:03.567887 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 9292008bd508 16 hours ago 324MB 2025-05-28 17:43:03.567898 | orchestrator | registry.osism.tech/kolla/redis 2024.2 bdd8a8d80398 16 hours ago 324MB 2025-05-28 17:43:03.567909 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 086f0a82b9cd 16 hours ago 1.21GB 2025-05-28 17:43:03.567919 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 1e09d922d303 16 hours ago 946MB 2025-05-28 17:43:03.567930 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 ce955ab1e21d 16 hours ago 947MB 2025-05-28 17:43:03.567940 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 8081a327ea05 16 hours ago 946MB 2025-05-28 17:43:03.567951 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 557562e5a1f1 16 hours ago 947MB 2025-05-28 17:43:03.567962 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 4e1fa6e3e8ec 16 hours ago 1.06GB 2025-05-28 17:43:03.567974 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 e3ba3bc91014 16 hours ago 1.06GB 2025-05-28 17:43:03.567985 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 efbaa23cd1be 16 hours ago 1.06GB 2025-05-28 17:43:03.567996 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 2e59a6427b26 16 hours ago 1.1GB 2025-05-28 17:43:03.568007 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 255ac3b67147 16 hours ago 1.1GB 2025-05-28 17:43:03.568018 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 0a49bc6995a1 16 hours ago 1.1GB 2025-05-28 17:43:03.568028 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 39ea15b02c36 16 hours ago 1.12GB 2025-05-28 17:43:03.568039 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 ec8dabe7ff26 16 hours ago 1.12GB 2025-05-28 17:43:03.568050 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 009c2d17fe06 16 hours ago 1.05GB 2025-05-28 17:43:03.568102 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 f27a2d5666c7 16 hours ago 1.05GB 2025-05-28 17:43:03.568115 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 424266eb2999 16 hours ago 1.05GB 2025-05-28 17:43:03.568125 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 a6e8b2b20454 16 hours ago 1.06GB 2025-05-28 17:43:03.568136 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 ab81265c7f1b 16 hours ago 1.06GB 2025-05-28 17:43:03.568147 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 b78748c34777 16 hours ago 1.05GB 2025-05-28 17:43:03.568157 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 df7de5c04030 16 hours ago 1.42GB 2025-05-28 17:43:03.568168 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 9698dffc368f 16 hours ago 1.29GB 2025-05-28 17:43:03.568178 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 c8e51d387157 16 hours ago 1.29GB 2025-05-28 17:43:03.568189 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 946668927342 16 hours ago 1.29GB 2025-05-28 17:43:03.568199 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 cad5e686b4d0 16 hours ago 1.41GB 2025-05-28 17:43:03.568210 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 a5befc91d7f8 16 hours ago 1.41GB 2025-05-28 17:43:03.568229 | orchestrator | registry.osism.tech/kolla/keystone-fernet 
2024.2 37717431f416 16 hours ago 1.11GB 2025-05-28 17:43:03.568240 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 96f2e4852e98 16 hours ago 1.13GB 2025-05-28 17:43:03.568251 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 6c8ae8d6c78c 16 hours ago 1.11GB 2025-05-28 17:43:03.568261 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 94442865f41b 16 hours ago 1.2GB 2025-05-28 17:43:03.568272 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 c0a9896b172c 16 hours ago 1.31GB 2025-05-28 17:43:03.568283 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 4ffa94b491be 16 hours ago 1.15GB 2025-05-28 17:43:03.568293 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 eef743fc2e82 16 hours ago 1.24GB 2025-05-28 17:43:03.568304 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 26f9b2842ab8 16 hours ago 1.04GB 2025-05-28 17:43:03.799205 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-05-28 17:43:03.799414 | orchestrator | ++ semver latest 5.0.0 2025-05-28 17:43:03.847868 | orchestrator | 2025-05-28 17:43:03.847966 | orchestrator | ## Containers @ testbed-node-2 2025-05-28 17:43:03.847983 | orchestrator | 2025-05-28 17:43:03.847995 | orchestrator | + [[ -1 -eq -1 ]] 2025-05-28 17:43:03.848006 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-28 17:43:03.848017 | orchestrator | + echo 2025-05-28 17:43:03.848028 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-05-28 17:43:03.848040 | orchestrator | + echo 2025-05-28 17:43:03.848051 | orchestrator | + osism container testbed-node-2 ps 2025-05-28 17:43:06.013912 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-05-28 17:43:06.016711 | orchestrator | 4fd425078578 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-05-28 17:43:06.016821 | orchestrator | dcf62d5edd4b registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-05-28 17:43:06.016837 | orchestrator | ae22be293d9e registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 5 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-05-28 17:43:06.016849 | orchestrator | a69f60fe93dc registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2025-05-28 17:43:06.016861 | orchestrator | 79479c367e40 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-05-28 17:43:06.016874 | orchestrator | 37370f8d686a registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-05-28 17:43:06.016885 | orchestrator | 136cebec12b9 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor 2025-05-28 17:43:06.016895 | orchestrator | 6820c2167485 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2025-05-28 17:43:06.016906 | orchestrator | c4e38700ac8d registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-05-28 17:43:06.016917 | orchestrator | c01c28f19255 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker 
2025-05-28 17:43:06.016927 | orchestrator | 2db82b30eb8d registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy 2025-05-28 17:43:06.016968 | orchestrator | 7f09e2e42afd registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2025-05-28 17:43:06.016979 | orchestrator | 6fa132ce508c registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns 2025-05-28 17:43:06.016991 | orchestrator | c83727fe6cb2 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2025-05-28 17:43:06.017002 | orchestrator | c248d5b7ab33 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor 2025-05-28 17:43:06.017012 | orchestrator | 8b78aafdfa4f registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2025-05-28 17:43:06.017023 | orchestrator | 13c715f6d180 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2025-05-28 17:43:06.017033 | orchestrator | cb32addcb74e registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2025-05-28 17:43:06.017044 | orchestrator | 14fad74c904b registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2025-05-28 17:43:06.017055 | orchestrator | c4b091d7f8fb registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2025-05-28 17:43:06.017066 | orchestrator | 1ba99183a7d5 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2025-05-28 17:43:06.017076 | orchestrator | be52db834280 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api 2025-05-28 17:43:06.017087 | orchestrator | 44a4acc483a9 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-05-28 17:43:06.017115 | orchestrator | 80b0be5a554c registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2025-05-28 17:43:06.017127 | orchestrator | 7e931c9c17c7 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api 2025-05-28 17:43:06.017138 | orchestrator | b0da0839c40d registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2025-05-28 17:43:06.017149 | orchestrator | 140cc40e6633 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2025-05-28 17:43:06.017160 | orchestrator | c1e266933163 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler 2025-05-28 17:43:06.017170 | orchestrator | 7b71a2d68dce registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2025-05-28 
17:43:06.017189 | orchestrator | fca0bebdd510 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2025-05-28 17:43:06.017221 | orchestrator | f19cd8c76962 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-05-28 17:43:06.017241 | orchestrator | f69193e81908 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-2 2025-05-28 17:43:06.017256 | orchestrator | d1efd2ce7b4a registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone 2025-05-28 17:43:06.017267 | orchestrator | dae8ff210e3e registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon 2025-05-28 17:43:06.017284 | orchestrator | 53a1469bfa77 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet 2025-05-28 17:43:06.017295 | orchestrator | 96bf5075c932 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh 2025-05-28 17:43:06.017306 | orchestrator | 8b25af9540a1 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards 2025-05-28 17:43:06.017317 | orchestrator | 39730c45eb31 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb 2025-05-28 17:43:06.017327 | orchestrator | fc523b4d2a50 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch 2025-05-28 17:43:06.017338 | orchestrator | e918159d6e3c registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-2 2025-05-28 17:43:06.017349 | orchestrator | bed8e405c5ba registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2025-05-28 17:43:06.017359 | orchestrator | 7133994a51b0 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql 2025-05-28 17:43:06.017370 | orchestrator | 20a175f0825c registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy 2025-05-28 17:43:06.017381 | orchestrator | 8ca7aadd48e4 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_northd 2025-05-28 17:43:06.017391 | orchestrator | 36fc5cb5941b registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_sb_db 2025-05-28 17:43:06.017410 | orchestrator | 2320b29c7c0a registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_nb_db 2025-05-28 17:43:06.017421 | orchestrator | 380c76eb4ca1 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller 2025-05-28 17:43:06.017432 | orchestrator | ec932327df8a registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq 2025-05-28 17:43:06.017443 | orchestrator | e23eda19bf4f registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-2 2025-05-28 17:43:06.017454 | orchestrator | bf0266f5898a 
registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd
2025-05-28 17:43:06.017480 | orchestrator | 2784da461804 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db
2025-05-28 17:43:06.017490 | orchestrator | 4d9273ec9968 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel
2025-05-28 17:43:06.017501 | orchestrator | 2895b929e519 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis
2025-05-28 17:43:06.017512 | orchestrator | da25ea18ef9f registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached
2025-05-28 17:43:06.017523 | orchestrator | 2c382e8df0d8 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron
2025-05-28 17:43:06.017534 | orchestrator | ff3619e37ba1 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox
2025-05-28 17:43:06.017544 | orchestrator | 35a1a577f5e1 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd
2025-05-28 17:43:06.283450 | orchestrator |
2025-05-28 17:43:06.283565 | orchestrator | ## Images @ testbed-node-2
2025-05-28 17:43:06.283580 | orchestrator |
2025-05-28 17:43:06.283592 | orchestrator | + echo
2025-05-28 17:43:06.283604 | orchestrator | + echo '## Images @ testbed-node-2'
2025-05-28 17:43:06.283617 | orchestrator | + echo
2025-05-28 17:43:06.283684 | orchestrator | + osism container testbed-node-2 images
2025-05-28 17:43:08.337055 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-05-28 17:43:08.337181 | orchestrator | registry.osism.tech/osism/ceph-daemon reef d68731bce62a 14 hours ago 1.27GB
2025-05-28 17:43:08.337196 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 a84b55c0d7e2 16 hours ago 375MB
2025-05-28 17:43:08.337208 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 609b84a264a7 16 hours ago 1.59GB
2025-05-28 17:43:08.337219 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 400f3edd5387 16 hours ago 1.55GB
2025-05-28 17:43:08.337230 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 639cfade0c7e 16 hours ago 326MB
2025-05-28 17:43:08.337241 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 8def5b434a82 16 hours ago 628MB
2025-05-28 17:43:08.337251 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 c153d6307ebd 16 hours ago 318MB
2025-05-28 17:43:08.337262 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 fb20b95d5799 16 hours ago 746MB
2025-05-28 17:43:08.337272 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 52bf6df12913 16 hours ago 1.01GB
2025-05-28 17:43:08.337283 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 fa9f3ff7f637 16 hours ago 329MB
2025-05-28 17:43:08.337293 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 ee146a727a54 16 hours ago 417MB
2025-05-28 17:43:08.337304 | orchestrator | registry.osism.tech/kolla/cron 2024.2 b5a3b35ecfe2 16 hours ago 318MB
2025-05-28 17:43:08.337314 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 1b1b3245cdc9 16 hours ago 351MB
2025-05-28 17:43:08.337325 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 1a459c88c97a 16 hours ago 410MB
2025-05-28 17:43:08.337335 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 12dd06a3f982 16 hours ago 358MB
2025-05-28 17:43:08.337346 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 3bcaa1b157c1 16 hours ago 353MB
2025-05-28 17:43:08.337388 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 e22dcb22e98b 16 hours ago 344MB
2025-05-28 17:43:08.337400 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 932290e1c405 16 hours ago 361MB
2025-05-28 17:43:08.337411 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 e84962bbdc78 16 hours ago 361MB
2025-05-28 17:43:08.337421 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 db1e22cb5a65 16 hours ago 590MB
2025-05-28 17:43:08.337432 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 9292008bd508 16 hours ago 324MB
2025-05-28 17:43:08.337442 | orchestrator | registry.osism.tech/kolla/redis 2024.2 bdd8a8d80398 16 hours ago 324MB
2025-05-28 17:43:08.337453 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 086f0a82b9cd 16 hours ago 1.21GB
2025-05-28 17:43:08.337463 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 ce955ab1e21d 16 hours ago 947MB
2025-05-28 17:43:08.337474 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 1e09d922d303 16 hours ago 946MB
2025-05-28 17:43:08.337505 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 8081a327ea05 16 hours ago 946MB
2025-05-28 17:43:08.337517 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 557562e5a1f1 16 hours ago 947MB
2025-05-28 17:43:08.337528 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 4e1fa6e3e8ec 16 hours ago 1.06GB
2025-05-28 17:43:08.337539 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 e3ba3bc91014 16 hours ago 1.06GB
2025-05-28 17:43:08.337549 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 efbaa23cd1be 16 hours ago 1.06GB
2025-05-28 17:43:08.337560 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 2e59a6427b26 16 hours ago 1.1GB
2025-05-28 17:43:08.337571 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 255ac3b67147 16 hours ago 1.1GB
2025-05-28 17:43:08.337584 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 0a49bc6995a1 16 hours ago 1.1GB
2025-05-28 17:43:08.337596 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 39ea15b02c36 16 hours ago 1.12GB
2025-05-28 17:43:08.337608 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 ec8dabe7ff26 16 hours ago 1.12GB
2025-05-28 17:43:08.337654 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 009c2d17fe06 16 hours ago 1.05GB
2025-05-28 17:43:08.337687 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 f27a2d5666c7 16 hours ago 1.05GB
2025-05-28 17:43:08.337700 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 424266eb2999 16 hours ago 1.05GB
2025-05-28 17:43:08.337713 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 a6e8b2b20454 16 hours ago 1.06GB
2025-05-28 17:43:08.337725 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 ab81265c7f1b 16 hours ago 1.06GB
2025-05-28 17:43:08.337737 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 b78748c34777 16 hours ago 1.05GB
2025-05-28 17:43:08.337750 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 df7de5c04030 16 hours ago 1.42GB
2025-05-28 17:43:08.337762 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 9698dffc368f 16 hours ago 1.29GB
2025-05-28 17:43:08.337780 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 c8e51d387157 16 hours ago 1.29GB
2025-05-28 17:43:08.337799 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 946668927342 16 hours ago 1.29GB
2025-05-28 17:43:08.337819 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 cad5e686b4d0 16 hours ago 1.41GB
2025-05-28 17:43:08.337848 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 a5befc91d7f8 16 hours ago 1.41GB
2025-05-28 17:43:08.337869 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 37717431f416 16 hours ago 1.11GB
2025-05-28 17:43:08.337890 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 96f2e4852e98 16 hours ago 1.13GB
2025-05-28 17:43:08.337909 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 6c8ae8d6c78c 16 hours ago 1.11GB
2025-05-28 17:43:08.337921 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 94442865f41b 16 hours ago 1.2GB
2025-05-28 17:43:08.337932 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 c0a9896b172c 16 hours ago 1.31GB
2025-05-28 17:43:08.337943 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 4ffa94b491be 16 hours ago 1.15GB
2025-05-28 17:43:08.337953 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 eef743fc2e82 16 hours ago 1.24GB
2025-05-28 17:43:08.337964 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 26f9b2842ab8 16 hours ago 1.04GB
2025-05-28 17:43:08.602960 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2025-05-28 17:43:08.608317 | orchestrator | + set -e
2025-05-28 17:43:08.608349 | orchestrator | + source /opt/manager-vars.sh
2025-05-28 17:43:08.609267 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-05-28 17:43:08.609294 | orchestrator | ++ NUMBER_OF_NODES=6
2025-05-28 17:43:08.609306 | orchestrator | ++ export CEPH_VERSION=reef
2025-05-28 17:43:08.609317 | orchestrator | ++ CEPH_VERSION=reef
2025-05-28 17:43:08.609334 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-05-28 17:43:08.609346 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-05-28 17:43:08.609358 | orchestrator | ++ export MANAGER_VERSION=latest
2025-05-28 17:43:08.609370 | orchestrator | ++ MANAGER_VERSION=latest
2025-05-28 17:43:08.609381 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-05-28 17:43:08.609391 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-05-28 17:43:08.609402 | orchestrator | ++ export ARA=false
2025-05-28 17:43:08.609413 | orchestrator | ++ ARA=false
2025-05-28 17:43:08.609423 | orchestrator | ++ export TEMPEST=false
2025-05-28 17:43:08.609434 | orchestrator | ++ TEMPEST=false
2025-05-28 17:43:08.609444 | orchestrator | ++ export IS_ZUUL=true
2025-05-28 17:43:08.609455 | orchestrator | ++ IS_ZUUL=true
2025-05-28 17:43:08.609466 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180
2025-05-28 17:43:08.609476 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180
2025-05-28 17:43:08.609487 | orchestrator | ++ export EXTERNAL_API=false
2025-05-28 17:43:08.609497 | orchestrator | ++ EXTERNAL_API=false
2025-05-28 17:43:08.609508 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-05-28 17:43:08.609518 | orchestrator | ++ IMAGE_USER=ubuntu
2025-05-28 17:43:08.609529 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-05-28 17:43:08.609539 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-05-28 17:43:08.609550 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-05-28 17:43:08.609560 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-05-28 17:43:08.609571 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-05-28 17:43:08.609582 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
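The trace above shows check-services.sh sourcing /opt/manager-vars.sh and then dispatching to a stack-specific check script based on CEPH_STACK. A minimal sketch of that dispatch pattern, reconstructed from the trace only (the real script may differ; the case form and the fallback branch are assumptions, since the trace shows an equivalent [[ ... ]] string comparison):

    #!/usr/bin/env bash
    # Sketch of the dispatch visible in the trace above; not the actual
    # /opt/configuration/scripts/check-services.sh.
    set -e

    # Provides NUMBER_OF_NODES, CEPH_VERSION, CEPH_STACK, OPENSTACK_VERSION, ...
    source /opt/manager-vars.sh

    # Run the Ceph checks that match the deployed stack.
    case "$CEPH_STACK" in
        ceph-ansible)
            sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
            ;;
        *)
            echo "no check script for CEPH_STACK=$CEPH_STACK" >&2
            exit 1
            ;;
    esac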
2025-05-28 17:43:08.619726 | orchestrator | + set -e
2025-05-28 17:43:08.619817 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-05-28 17:43:08.619834 | orchestrator | ++ export INTERACTIVE=false
2025-05-28 17:43:08.619847 | orchestrator | ++ INTERACTIVE=false
2025-05-28 17:43:08.619857 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-05-28 17:43:08.619868 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-05-28 17:43:08.619879 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-05-28 17:43:08.620830 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-05-28 17:43:08.627070 | orchestrator |
2025-05-28 17:43:08.627106 | orchestrator | # Ceph status
2025-05-28 17:43:08.627118 | orchestrator |
2025-05-28 17:43:08.627129 | orchestrator | ++ export MANAGER_VERSION=latest
2025-05-28 17:43:08.627140 | orchestrator | ++ MANAGER_VERSION=latest
2025-05-28 17:43:08.627151 | orchestrator | + echo
2025-05-28 17:43:08.627162 | orchestrator | + echo '# Ceph status'
2025-05-28 17:43:08.627173 | orchestrator | + echo
2025-05-28 17:43:08.627184 | orchestrator | + ceph -s
2025-05-28 17:43:09.199816 | orchestrator | cluster:
2025-05-28 17:43:09.199956 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2025-05-28 17:43:09.199984 | orchestrator | health: HEALTH_OK
2025-05-28 17:43:09.200003 | orchestrator |
2025-05-28 17:43:09.200021 | orchestrator | services:
2025-05-28 17:43:09.200038 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 27m)
2025-05-28 17:43:09.200058 | orchestrator | mgr: testbed-node-2(active, since 15m), standbys: testbed-node-0, testbed-node-1
2025-05-28 17:43:09.200077 | orchestrator | mds: 1/1 daemons up, 2 standby
2025-05-28 17:43:09.200094 | orchestrator | osd: 6 osds: 6 up (since 23m), 6 in (since 24m)
2025-05-28 17:43:09.200110 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2025-05-28 17:43:09.200126 | orchestrator |
2025-05-28 17:43:09.200144 | orchestrator | data:
2025-05-28 17:43:09.200162 | orchestrator | volumes: 1/1 healthy
2025-05-28 17:43:09.200181 | orchestrator | pools: 14 pools, 401 pgs
2025-05-28 17:43:09.200199 | orchestrator | objects: 524 objects, 2.2 GiB
2025-05-28 17:43:09.200217 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail
2025-05-28 17:43:09.200236 | orchestrator | pgs: 401 active+clean
2025-05-28 17:43:09.200255 | orchestrator |
2025-05-28 17:43:09.253735 | orchestrator |
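The ceph -s output above reports HEALTH_OK, all three mons in quorum, an active mgr with two standbys, and all six OSDs up and in. When scripting such a gate, the JSON form is more robust than parsing the text output; a small sketch (assuming jq is available on the node, as the quorum check further below already implies):

    # Exit non-zero unless the cluster reports HEALTH_OK.
    ceph -s --format json | jq -e '.health.status == "HEALTH_OK"' > /dev/null

    # Same idea for the OSD counts shown above (6 up of 6 total).
    ceph osd stat --format json | jq -e '.num_up_osds == .num_osds' > /dev/null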
2025-05-28 17:43:09.253841 | orchestrator | # Ceph versions
2025-05-28 17:43:09.253855 | orchestrator |
2025-05-28 17:43:09.253866 | orchestrator | + echo
2025-05-28 17:43:09.253878 | orchestrator | + echo '# Ceph versions'
2025-05-28 17:43:09.253891 | orchestrator | + echo
2025-05-28 17:43:09.253902 | orchestrator | + ceph versions
2025-05-28 17:43:09.823150 | orchestrator | {
2025-05-28 17:43:09.823276 | orchestrator | "mon": {
2025-05-28 17:43:09.823293 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-05-28 17:43:09.823306 | orchestrator | },
2025-05-28 17:43:09.823318 | orchestrator | "mgr": {
2025-05-28 17:43:09.823329 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-05-28 17:43:09.823339 | orchestrator | },
2025-05-28 17:43:09.823351 | orchestrator | "osd": {
2025-05-28 17:43:09.823361 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2025-05-28 17:43:09.823372 | orchestrator | },
2025-05-28 17:43:09.823383 | orchestrator | "mds": {
2025-05-28 17:43:09.823393 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-05-28 17:43:09.823403 | orchestrator | },
2025-05-28 17:43:09.823414 | orchestrator | "rgw": {
2025-05-28 17:43:09.823424 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-05-28 17:43:09.823435 | orchestrator | },
2025-05-28 17:43:09.823446 | orchestrator | "overall": {
2025-05-28 17:43:09.823457 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2025-05-28 17:43:09.823468 | orchestrator | }
2025-05-28 17:43:09.823479 | orchestrator | }
2025-05-28 17:43:09.878587 | orchestrator |
2025-05-28 17:43:09.878725 | orchestrator | # Ceph OSD tree
2025-05-28 17:43:09.878739 | orchestrator |
2025-05-28 17:43:09.878751 | orchestrator | + echo
2025-05-28 17:43:09.878762 | orchestrator | + echo '# Ceph OSD tree'
2025-05-28 17:43:09.878774 | orchestrator | + echo
2025-05-28 17:43:09.878785 | orchestrator | + ceph osd df tree
2025-05-28 17:43:10.383111 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
2025-05-28 17:43:10.383249 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default
2025-05-28 17:43:10.383263 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.91 1.00 - host testbed-node-3
2025-05-28 17:43:10.383274 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.1 GiB 1 KiB 70 MiB 19 GiB 5.71 0.97 189 up osd.0
2025-05-28 17:43:10.383284 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.12 1.03 201 up osd.3
2025-05-28 17:43:10.383295 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4
2025-05-28 17:43:10.383306 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 70 MiB 19 GiB 5.56 0.94 195 up osd.1
2025-05-28 17:43:10.383317 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.28 1.06 197 up osd.5
2025-05-28 17:43:10.383359 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5
2025-05-28 17:43:10.383371 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.53 1.10 198 up osd.2
2025-05-28 17:43:10.383381 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1011 MiB 1 KiB 74 MiB 19 GiB 5.30 0.90 190 up osd.4
2025-05-28 17:43:10.383392 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92
2025-05-28 17:43:10.383403 | orchestrator | MIN/MAX VAR: 0.90/1.10 STDDEV: 0.43
2025-05-28 17:43:10.429420 | orchestrator |
2025-05-28 17:43:10.429507 | orchestrator | # Ceph monitor status
2025-05-28 17:43:10.429517 | orchestrator |
2025-05-28 17:43:10.429524 | orchestrator | + echo
2025-05-28 17:43:10.429531 | orchestrator | + echo '# Ceph monitor status'
2025-05-28 17:43:10.429538 | orchestrator | + echo
2025-05-28 17:43:10.429545 | orchestrator | + ceph mon stat
2025-05-28 17:43:10.984858 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
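ceph versions above shows a single version string, 18.2.7 reef, across all 18 daemons (3 mon, 3 mgr, 6 osd, 3 mds, 3 rgw), and the OSD tree shows usage spread evenly across the six OSDs (VAR between 0.90 and 1.10). A one-line assertion that no mixed versions are running, handy after upgrades (jq assumed available; this is not part of the check script shown here):

    # .overall maps each distinct version string to a daemon count;
    # exactly one key means the cluster is not partially upgraded.
    test "$(ceph versions | jq '.overall | length')" -eq 1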
2025-05-28 17:43:11.031782 | orchestrator |
2025-05-28 17:43:11.031894 | orchestrator | # Ceph quorum status
2025-05-28 17:43:11.031910 | orchestrator |
2025-05-28 17:43:11.031923 | orchestrator | + echo
2025-05-28 17:43:11.031935 | orchestrator | + echo '# Ceph quorum status'
2025-05-28 17:43:11.031947 | orchestrator | + echo
2025-05-28 17:43:11.032301 | orchestrator | + ceph quorum_status
2025-05-28 17:43:11.032326 | orchestrator | + jq
2025-05-28 17:43:11.665719 | orchestrator | {
2025-05-28 17:43:11.665823 | orchestrator | "election_epoch": 8,
2025-05-28 17:43:11.665838 | orchestrator | "quorum": [
2025-05-28 17:43:11.665850 | orchestrator | 0,
2025-05-28 17:43:11.665861 | orchestrator | 1,
2025-05-28 17:43:11.665872 | orchestrator | 2
2025-05-28 17:43:11.665883 | orchestrator | ],
2025-05-28 17:43:11.665893 | orchestrator | "quorum_names": [
2025-05-28 17:43:11.665904 | orchestrator | "testbed-node-0",
2025-05-28 17:43:11.665915 | orchestrator | "testbed-node-1",
2025-05-28 17:43:11.665925 | orchestrator | "testbed-node-2"
2025-05-28 17:43:11.665936 | orchestrator | ],
2025-05-28 17:43:11.665947 | orchestrator | "quorum_leader_name": "testbed-node-0",
2025-05-28 17:43:11.665959 | orchestrator | "quorum_age": 1636,
2025-05-28 17:43:11.665969 | orchestrator | "features": {
2025-05-28 17:43:11.665980 | orchestrator | "quorum_con": "4540138322906710015",
2025-05-28 17:43:11.665991 | orchestrator | "quorum_mon": [
2025-05-28 17:43:11.666002 | orchestrator | "kraken",
2025-05-28 17:43:11.666013 | orchestrator | "luminous",
2025-05-28 17:43:11.666083 | orchestrator | "mimic",
2025-05-28 17:43:11.666095 | orchestrator | "osdmap-prune",
2025-05-28 17:43:11.666105 | orchestrator | "nautilus",
2025-05-28 17:43:11.666116 | orchestrator | "octopus",
2025-05-28 17:43:11.666127 | orchestrator | "pacific",
2025-05-28 17:43:11.666137 | orchestrator | "elector-pinging",
2025-05-28 17:43:11.666147 | orchestrator | "quincy",
2025-05-28 17:43:11.666158 | orchestrator | "reef"
2025-05-28 17:43:11.666169 | orchestrator | ]
2025-05-28 17:43:11.666181 | orchestrator | },
2025-05-28 17:43:11.666192 | orchestrator | "monmap": {
2025-05-28 17:43:11.666204 | orchestrator | "epoch": 1,
2025-05-28 17:43:11.666216 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111",
2025-05-28 17:43:11.666229 | orchestrator | "modified": "2025-05-28T17:15:38.745132Z",
2025-05-28 17:43:11.666241 | orchestrator | "created": "2025-05-28T17:15:38.745132Z",
2025-05-28 17:43:11.666252 | orchestrator | "min_mon_release": 18,
2025-05-28 17:43:11.666264 | orchestrator | "min_mon_release_name": "reef",
2025-05-28 17:43:11.666276 | orchestrator | "election_strategy": 1,
2025-05-28 17:43:11.666288 | orchestrator | "disallowed_leaders: ": "",
2025-05-28 17:43:11.666300 | orchestrator | "stretch_mode": false,
2025-05-28 17:43:11.666311 | orchestrator | "tiebreaker_mon": "",
2025-05-28 17:43:11.666323 | orchestrator | "removed_ranks: ": "",
2025-05-28 17:43:11.666334 | orchestrator | "features": {
2025-05-28 17:43:11.666346 | orchestrator | "persistent": [
2025-05-28 17:43:11.666357 | orchestrator | "kraken",
2025-05-28 17:43:11.666369 | orchestrator | "luminous",
2025-05-28 17:43:11.666381 | orchestrator | "mimic",
2025-05-28 17:43:11.666423 | orchestrator | "osdmap-prune",
2025-05-28 17:43:11.666435 | orchestrator | "nautilus",
2025-05-28 17:43:11.666448 | orchestrator | "octopus",
2025-05-28 17:43:11.666460 | orchestrator | "pacific",
2025-05-28 17:43:11.666471 | orchestrator | "elector-pinging",
2025-05-28 17:43:11.666483 | orchestrator | "quincy",
2025-05-28 17:43:11.666494 | orchestrator | "reef"
2025-05-28 17:43:11.666507 | orchestrator | ],
2025-05-28 17:43:11.666519 | orchestrator | "optional": []
2025-05-28 17:43:11.666530 | orchestrator | },
2025-05-28 17:43:11.666542 | orchestrator | "mons": [
2025-05-28 17:43:11.666552 | orchestrator | {
2025-05-28 17:43:11.666578 | orchestrator | "rank": 0,
2025-05-28 17:43:11.666589 | orchestrator | "name": "testbed-node-0",
2025-05-28 17:43:11.666599 | orchestrator | "public_addrs": {
2025-05-28 17:43:11.666610 | orchestrator | "addrvec": [
2025-05-28 17:43:11.666620 | orchestrator | {
2025-05-28 17:43:11.666630 | orchestrator | "type": "v2",
2025-05-28 17:43:11.666660 | orchestrator | "addr": "192.168.16.10:3300",
2025-05-28 17:43:11.666671 | orchestrator | "nonce": 0
2025-05-28 17:43:11.666682 | orchestrator | },
2025-05-28 17:43:11.666693 | orchestrator | {
2025-05-28 17:43:11.666703 | orchestrator | "type": "v1",
2025-05-28 17:43:11.666714 | orchestrator | "addr": "192.168.16.10:6789",
2025-05-28 17:43:11.666724 | orchestrator | "nonce": 0
2025-05-28 17:43:11.666734 | orchestrator | }
2025-05-28 17:43:11.666745 | orchestrator | ]
2025-05-28 17:43:11.666755 | orchestrator | },
2025-05-28 17:43:11.666766 | orchestrator | "addr": "192.168.16.10:6789/0",
2025-05-28 17:43:11.666776 | orchestrator | "public_addr": "192.168.16.10:6789/0",
2025-05-28 17:43:11.666787 | orchestrator | "priority": 0,
2025-05-28 17:43:11.666797 | orchestrator | "weight": 0,
2025-05-28 17:43:11.666807 | orchestrator | "crush_location": "{}"
2025-05-28 17:43:11.666818 | orchestrator | },
2025-05-28 17:43:11.666828 | orchestrator | {
2025-05-28 17:43:11.666838 | orchestrator | "rank": 1,
2025-05-28 17:43:11.666849 | orchestrator | "name": "testbed-node-1",
2025-05-28 17:43:11.666859 | orchestrator | "public_addrs": {
2025-05-28 17:43:11.666870 | orchestrator | "addrvec": [
2025-05-28 17:43:11.666880 | orchestrator | {
2025-05-28 17:43:11.666890 | orchestrator | "type": "v2",
2025-05-28 17:43:11.666901 | orchestrator | "addr": "192.168.16.11:3300",
2025-05-28 17:43:11.666911 | orchestrator | "nonce": 0
2025-05-28 17:43:11.666922 | orchestrator | },
2025-05-28 17:43:11.666932 | orchestrator | {
2025-05-28 17:43:11.666943 | orchestrator | "type": "v1",
2025-05-28 17:43:11.666953 | orchestrator | "addr": "192.168.16.11:6789",
2025-05-28 17:43:11.666964 | orchestrator | "nonce": 0
2025-05-28 17:43:11.666974 | orchestrator | }
2025-05-28 17:43:11.666985 | orchestrator | ]
2025-05-28 17:43:11.666995 | orchestrator | },
2025-05-28 17:43:11.667005 | orchestrator | "addr": "192.168.16.11:6789/0",
2025-05-28 17:43:11.667016 | orchestrator | "public_addr": "192.168.16.11:6789/0",
2025-05-28 17:43:11.667026 | orchestrator | "priority": 0,
2025-05-28 17:43:11.667037 | orchestrator | "weight": 0,
2025-05-28 17:43:11.667047 | orchestrator | "crush_location": "{}"
2025-05-28 17:43:11.667058 | orchestrator | },
2025-05-28 17:43:11.667068 | orchestrator | {
2025-05-28 17:43:11.667078 | orchestrator | "rank": 2,
2025-05-28 17:43:11.667089 | orchestrator | "name": "testbed-node-2",
2025-05-28 17:43:11.667099 | orchestrator | "public_addrs": {
2025-05-28 17:43:11.667110 | orchestrator | "addrvec": [
2025-05-28 17:43:11.667120 | orchestrator | {
2025-05-28 17:43:11.667131 | orchestrator | "type": "v2",
2025-05-28 17:43:11.667141 | orchestrator | "addr": "192.168.16.12:3300",
2025-05-28 17:43:11.667152 | orchestrator | "nonce": 0
2025-05-28 17:43:11.667162 | orchestrator | },
2025-05-28 17:43:11.667173 | orchestrator | {
2025-05-28 17:43:11.667183 | orchestrator | "type": "v1",
2025-05-28 17:43:11.667194 | orchestrator | "addr": "192.168.16.12:6789",
2025-05-28 17:43:11.667204 | orchestrator | "nonce": 0
2025-05-28 17:43:11.667215 | orchestrator | }
2025-05-28 17:43:11.667225 | orchestrator | ]
2025-05-28 17:43:11.667235 | orchestrator | },
2025-05-28 17:43:11.667246 | orchestrator | "addr": "192.168.16.12:6789/0",
2025-05-28 17:43:11.667256 | orchestrator | "public_addr": "192.168.16.12:6789/0",
2025-05-28 17:43:11.667267 | orchestrator | "priority": 0,
2025-05-28 17:43:11.667277 | orchestrator | "weight": 0,
2025-05-28 17:43:11.667294 | orchestrator | "crush_location": "{}"
2025-05-28 17:43:11.667305 | orchestrator | }
2025-05-28 17:43:11.667316 | orchestrator | ]
2025-05-28 17:43:11.667326 | orchestrator | }
2025-05-28 17:43:11.667337 | orchestrator | }
2025-05-28 17:43:11.667347 | orchestrator |
2025-05-28 17:43:11.667358 | orchestrator | # Ceph free space status
2025-05-28 17:43:11.667369 | orchestrator |
2025-05-28 17:43:11.667380 | orchestrator | + echo
2025-05-28 17:43:11.667391 | orchestrator | + echo '# Ceph free space status'
2025-05-28 17:43:11.667402 | orchestrator | + echo
2025-05-28 17:43:11.667412 | orchestrator | + ceph df
2025-05-28 17:43:12.260715 | orchestrator | --- RAW STORAGE ---
2025-05-28 17:43:12.260804 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED
2025-05-28 17:43:12.260828 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2025-05-28 17:43:12.260838 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2025-05-28 17:43:12.260848 | orchestrator |
2025-05-28 17:43:12.260858 | orchestrator | --- POOLS ---
2025-05-28 17:43:12.260869 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
2025-05-28 17:43:12.260879 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB
2025-05-28 17:43:12.260888 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB
2025-05-28 17:43:12.260898 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB
2025-05-28 17:43:12.260907 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB
2025-05-28 17:43:12.260917 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB
2025-05-28 17:43:12.260926 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB
2025-05-28 17:43:12.260935 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB
2025-05-28 17:43:12.260945 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB
2025-05-28 17:43:12.260954 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB
2025-05-28 17:43:12.260963 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB
2025-05-28 17:43:12.260973 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB
2025-05-28 17:43:12.260982 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.91 35 GiB
2025-05-28 17:43:12.260991 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB
2025-05-28 17:43:12.261000 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB
2025-05-28 17:43:12.305346 | orchestrator | ++ semver latest 5.0.0
2025-05-28 17:43:12.354382 | orchestrator | + [[ -1 -eq -1 ]]
2025-05-28 17:43:12.354452 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
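The quorum_status dump confirms what ceph mon stat already summarized: election epoch 8, leader testbed-node-0, and all three monitors from the monmap present in quorum_names; ceph df adds that only about 6% of the raw capacity is used, almost all of it by the images pool. The trailing semver and [[ ... ]] lines are the script's guard for version-specific behavior when MANAGER_VERSION is latest. A compact form of the quorum comparison done by eye above (jq assumed available, as the script itself pipes quorum_status through jq):

    # Every monitor in the monmap must also be in quorum.
    ceph quorum_status | jq -e \
        '(.quorum_names | length) == (.monmap.mons | length)' > /dev/null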
2025-05-28 17:43:12.354464 | orchestrator | + [[ ! -e /etc/redhat-release ]]
2025-05-28 17:43:12.354474 | orchestrator | + osism apply facts
2025-05-28 17:43:14.033164 | orchestrator | Registering Redlock._acquired_script
2025-05-28 17:43:14.033267 | orchestrator | Registering Redlock._extend_script
2025-05-28 17:43:14.033285 | orchestrator | Registering Redlock._release_script
2025-05-28 17:43:14.090500 | orchestrator | 2025-05-28 17:43:14 | INFO  | Task 4e366ef6-794d-46f8-a060-97a1386b144c (facts) was prepared for execution.
2025-05-28 17:43:14.090585 | orchestrator | 2025-05-28 17:43:14 | INFO  | It takes a moment until task 4e366ef6-794d-46f8-a060-97a1386b144c (facts) has been started and output is visible here.
2025-05-28 17:43:18.100373 | orchestrator |
2025-05-28 17:43:18.100813 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-05-28 17:43:18.102111 | orchestrator |
2025-05-28 17:43:18.103900 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-05-28 17:43:18.104779 | orchestrator | Wednesday 28 May 2025 17:43:18 +0000 (0:00:00.264) 0:00:00.264 *********
2025-05-28 17:43:18.733622 | orchestrator | ok: [testbed-manager]
2025-05-28 17:43:19.561697 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:43:19.561805 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:43:19.562082 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:43:19.563159 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:43:19.563865 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:43:19.564531 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:43:19.565231 | orchestrator |
2025-05-28 17:43:19.565692 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-05-28 17:43:19.566180 | orchestrator | Wednesday 28 May 2025 17:43:19 +0000 (0:00:01.458) 0:00:01.722 *********
2025-05-28 17:43:19.740283 | orchestrator | skipping: [testbed-manager]
2025-05-28 17:43:19.829928 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:43:19.920832 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:43:20.029389 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:43:20.109968 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:43:20.876633 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:43:20.877059 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:43:20.877308 | orchestrator |
2025-05-28 17:43:20.880165 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-28 17:43:20.880466 | orchestrator |
2025-05-28 17:43:20.881474 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-28 17:43:20.881748 | orchestrator | Wednesday 28 May 2025 17:43:20 +0000 (0:00:01.321) 0:00:03.044 *********
2025-05-28 17:43:26.079435 | orchestrator | ok: [testbed-node-1]
2025-05-28 17:43:26.079573 | orchestrator | ok: [testbed-node-2]
2025-05-28 17:43:26.079891 | orchestrator | ok: [testbed-node-0]
2025-05-28 17:43:26.080388 | orchestrator | ok: [testbed-manager]
2025-05-28 17:43:26.080916 | orchestrator | ok: [testbed-node-3]
2025-05-28 17:43:26.081627 | orchestrator | ok: [testbed-node-4]
2025-05-28 17:43:26.081669 | orchestrator | ok: [testbed-node-5]
2025-05-28 17:43:26.082093 | orchestrator |
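osism apply facts runs the osism.commons.facts role (the custom facts directory already exists and no fact files needed copying, hence the ok and skipping results) and then re-gathers facts on all seven hosts so that later plays see the current state. To spot-check what was gathered on a node afterwards, an ad-hoc call along these lines works (illustrative only, assuming an inventory in which testbed-node-0 resolves):

    # Custom facts (typically from /etc/ansible/facts.d) surface under
    # the ansible_local key after fact gathering.
    ansible testbed-node-0 -m ansible.builtin.setup -a 'filter=ansible_local'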
2025-05-28 17:43:26.084876 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-05-28 17:43:26.085029 | orchestrator |
2025-05-28 17:43:26.085586 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-05-28 17:43:26.088281 | orchestrator | Wednesday 28 May 2025 17:43:26 +0000 (0:00:05.202) 0:00:08.246 *********
2025-05-28 17:43:26.268108 | orchestrator | skipping: [testbed-manager]
2025-05-28 17:43:26.348182 | orchestrator | skipping: [testbed-node-0]
2025-05-28 17:43:26.429301 | orchestrator | skipping: [testbed-node-1]
2025-05-28 17:43:26.507613 | orchestrator | skipping: [testbed-node-2]
2025-05-28 17:43:26.590236 | orchestrator | skipping: [testbed-node-3]
2025-05-28 17:43:26.636084 | orchestrator | skipping: [testbed-node-4]
2025-05-28 17:43:26.639487 | orchestrator | skipping: [testbed-node-5]
2025-05-28 17:43:26.641385 | orchestrator |
2025-05-28 17:43:26.642407 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 17:43:26.643094 | orchestrator | 2025-05-28 17:43:26 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-28 17:43:26.643391 | orchestrator | 2025-05-28 17:43:26 | INFO  | Please wait and do not abort execution.
2025-05-28 17:43:26.645219 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-28 17:43:26.646122 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-28 17:43:26.647145 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-28 17:43:26.647943 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-28 17:43:26.648652 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-28 17:43:26.649812 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-28 17:43:26.650306 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-28 17:43:26.651332 | orchestrator |
2025-05-28 17:43:26.651827 | orchestrator |
2025-05-28 17:43:26.652335 | orchestrator | TASKS RECAP ********************************************************************
2025-05-28 17:43:26.652975 | orchestrator | Wednesday 28 May 2025 17:43:26 +0000 (0:00:00.558) 0:00:08.805 *********
2025-05-28 17:43:26.653535 | orchestrator | ===============================================================================
2025-05-28 17:43:26.654113 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.20s
2025-05-28 17:43:26.655008 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.46s
2025-05-28 17:43:26.655636 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.32s
2025-05-28 17:43:26.656291 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s
2025-05-28 17:43:27.287486 | orchestrator | + osism validate ceph-mons
2025-05-28 17:43:28.923053 | orchestrator | Registering Redlock._acquired_script
2025-05-28 17:43:28.923179 | orchestrator | Registering Redlock._extend_script
2025-05-28 17:43:28.923194 | orchestrator | Registering Redlock._release_script
2025-05-28 17:43:48.292367 | orchestrator |
2025-05-28 17:43:48.292499 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-05-28
17:43:48.292517 | orchestrator | 2025-05-28 17:43:48.292529 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-05-28 17:43:48.292541 | orchestrator | Wednesday 28 May 2025 17:43:33 +0000 (0:00:00.420) 0:00:00.420 ********* 2025-05-28 17:43:48.292552 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-28 17:43:48.292563 | orchestrator | 2025-05-28 17:43:48.292574 | orchestrator | TASK [Create report output directory] ****************************************** 2025-05-28 17:43:48.292584 | orchestrator | Wednesday 28 May 2025 17:43:33 +0000 (0:00:00.620) 0:00:01.041 ********* 2025-05-28 17:43:48.292596 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-28 17:43:48.292607 | orchestrator | 2025-05-28 17:43:48.292636 | orchestrator | TASK [Define report vars] ****************************************************** 2025-05-28 17:43:48.292647 | orchestrator | Wednesday 28 May 2025 17:43:34 +0000 (0:00:00.799) 0:00:01.840 ********* 2025-05-28 17:43:48.292658 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:43:48.292670 | orchestrator | 2025-05-28 17:43:48.292681 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-05-28 17:43:48.292692 | orchestrator | Wednesday 28 May 2025 17:43:34 +0000 (0:00:00.251) 0:00:02.091 ********* 2025-05-28 17:43:48.292703 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:43:48.292714 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:43:48.292724 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:43:48.292794 | orchestrator | 2025-05-28 17:43:48.292806 | orchestrator | TASK [Get container info] ****************************************************** 2025-05-28 17:43:48.292817 | orchestrator | Wednesday 28 May 2025 17:43:35 +0000 (0:00:00.296) 0:00:02.388 ********* 2025-05-28 17:43:48.292828 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:43:48.292838 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:43:48.292849 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:43:48.292860 | orchestrator | 2025-05-28 17:43:48.292870 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-05-28 17:43:48.292882 | orchestrator | Wednesday 28 May 2025 17:43:36 +0000 (0:00:01.005) 0:00:03.394 ********* 2025-05-28 17:43:48.292895 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:43:48.292908 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:43:48.292920 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:43:48.292932 | orchestrator | 2025-05-28 17:43:48.292945 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-05-28 17:43:48.292957 | orchestrator | Wednesday 28 May 2025 17:43:36 +0000 (0:00:00.311) 0:00:03.705 ********* 2025-05-28 17:43:48.292969 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:43:48.292981 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:43:48.292993 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:43:48.293004 | orchestrator | 2025-05-28 17:43:48.293016 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-05-28 17:43:48.293050 | orchestrator | Wednesday 28 May 2025 17:43:36 +0000 (0:00:00.478) 0:00:04.184 ********* 2025-05-28 17:43:48.293062 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:43:48.293074 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:43:48.293086 | orchestrator | ok: [testbed-node-2] 2025-05-28 
17:43:48.293098 | orchestrator | 2025-05-28 17:43:48.293110 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2025-05-28 17:43:48.293122 | orchestrator | Wednesday 28 May 2025 17:43:37 +0000 (0:00:00.313) 0:00:04.498 ********* 2025-05-28 17:43:48.293134 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:43:48.293146 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:43:48.293158 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:43:48.293170 | orchestrator | 2025-05-28 17:43:48.293182 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-05-28 17:43:48.293195 | orchestrator | Wednesday 28 May 2025 17:43:37 +0000 (0:00:00.300) 0:00:04.798 ********* 2025-05-28 17:43:48.293207 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:43:48.293219 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:43:48.293231 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:43:48.293243 | orchestrator | 2025-05-28 17:43:48.293255 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-05-28 17:43:48.293267 | orchestrator | Wednesday 28 May 2025 17:43:37 +0000 (0:00:00.289) 0:00:05.087 ********* 2025-05-28 17:43:48.293279 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:43:48.293291 | orchestrator | 2025-05-28 17:43:48.293301 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-05-28 17:43:48.293312 | orchestrator | Wednesday 28 May 2025 17:43:38 +0000 (0:00:00.633) 0:00:05.721 ********* 2025-05-28 17:43:48.293323 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:43:48.293334 | orchestrator | 2025-05-28 17:43:48.293345 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-05-28 17:43:48.293355 | orchestrator | Wednesday 28 May 2025 17:43:38 +0000 (0:00:00.251) 0:00:05.972 ********* 2025-05-28 17:43:48.293366 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:43:48.293377 | orchestrator | 2025-05-28 17:43:48.293387 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-28 17:43:48.293398 | orchestrator | Wednesday 28 May 2025 17:43:38 +0000 (0:00:00.249) 0:00:06.222 ********* 2025-05-28 17:43:48.293409 | orchestrator | 2025-05-28 17:43:48.293420 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-28 17:43:48.293430 | orchestrator | Wednesday 28 May 2025 17:43:39 +0000 (0:00:00.068) 0:00:06.290 ********* 2025-05-28 17:43:48.293441 | orchestrator | 2025-05-28 17:43:48.293451 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-28 17:43:48.293462 | orchestrator | Wednesday 28 May 2025 17:43:39 +0000 (0:00:00.070) 0:00:06.361 ********* 2025-05-28 17:43:48.293473 | orchestrator | 2025-05-28 17:43:48.293484 | orchestrator | TASK [Print report file information] ******************************************* 2025-05-28 17:43:48.293494 | orchestrator | Wednesday 28 May 2025 17:43:39 +0000 (0:00:00.071) 0:00:06.433 ********* 2025-05-28 17:43:48.293505 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:43:48.293516 | orchestrator | 2025-05-28 17:43:48.293526 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-05-28 17:43:48.293537 | orchestrator | Wednesday 28 May 2025 17:43:39 +0000 (0:00:00.250) 0:00:06.684 ********* 2025-05-28 
17:43:48.293548 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:43:48.293559 | orchestrator | 2025-05-28 17:43:48.293588 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-05-28 17:43:48.293600 | orchestrator | Wednesday 28 May 2025 17:43:39 +0000 (0:00:00.233) 0:00:06.917 ********* 2025-05-28 17:43:48.293611 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:43:48.293622 | orchestrator | 2025-05-28 17:43:48.293632 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2025-05-28 17:43:48.293643 | orchestrator | Wednesday 28 May 2025 17:43:39 +0000 (0:00:00.111) 0:00:07.028 ********* 2025-05-28 17:43:48.293662 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:43:48.293672 | orchestrator | 2025-05-28 17:43:48.293683 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-05-28 17:43:48.293693 | orchestrator | Wednesday 28 May 2025 17:43:41 +0000 (0:00:01.552) 0:00:08.581 ********* 2025-05-28 17:43:48.293711 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:43:48.293722 | orchestrator | 2025-05-28 17:43:48.293755 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2025-05-28 17:43:48.293766 | orchestrator | Wednesday 28 May 2025 17:43:41 +0000 (0:00:00.351) 0:00:08.932 ********* 2025-05-28 17:43:48.293777 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:43:48.293788 | orchestrator | 2025-05-28 17:43:48.293799 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-05-28 17:43:48.293809 | orchestrator | Wednesday 28 May 2025 17:43:41 +0000 (0:00:00.325) 0:00:09.257 ********* 2025-05-28 17:43:48.293820 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:43:48.293830 | orchestrator | 2025-05-28 17:43:48.293841 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-05-28 17:43:48.293852 | orchestrator | Wednesday 28 May 2025 17:43:42 +0000 (0:00:00.331) 0:00:09.589 ********* 2025-05-28 17:43:48.293862 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:43:48.293873 | orchestrator | 2025-05-28 17:43:48.293883 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-05-28 17:43:48.293894 | orchestrator | Wednesday 28 May 2025 17:43:42 +0000 (0:00:00.319) 0:00:09.908 ********* 2025-05-28 17:43:48.293905 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:43:48.293915 | orchestrator | 2025-05-28 17:43:48.293926 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-05-28 17:43:48.293937 | orchestrator | Wednesday 28 May 2025 17:43:42 +0000 (0:00:00.122) 0:00:10.031 ********* 2025-05-28 17:43:48.293947 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:43:48.293958 | orchestrator | 2025-05-28 17:43:48.293969 | orchestrator | TASK [Prepare status test vars] ************************************************ 2025-05-28 17:43:48.293979 | orchestrator | Wednesday 28 May 2025 17:43:42 +0000 (0:00:00.119) 0:00:10.151 ********* 2025-05-28 17:43:48.293990 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:43:48.294000 | orchestrator | 2025-05-28 17:43:48.294011 | orchestrator | TASK [Gather status data] ****************************************************** 2025-05-28 17:43:48.294089 | orchestrator | Wednesday 28 May 2025 17:43:42 +0000 (0:00:00.114) 0:00:10.265 ********* 2025-05-28 17:43:48.294101 | 
orchestrator | changed: [testbed-node-0] 2025-05-28 17:43:48.294111 | orchestrator | 2025-05-28 17:43:48.294122 | orchestrator | TASK [Set health test data] **************************************************** 2025-05-28 17:43:48.294133 | orchestrator | Wednesday 28 May 2025 17:43:44 +0000 (0:00:01.327) 0:00:11.593 ********* 2025-05-28 17:43:48.294143 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:43:48.294154 | orchestrator | 2025-05-28 17:43:48.294165 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-05-28 17:43:48.294175 | orchestrator | Wednesday 28 May 2025 17:43:44 +0000 (0:00:00.286) 0:00:11.879 ********* 2025-05-28 17:43:48.294186 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:43:48.294197 | orchestrator | 2025-05-28 17:43:48.294208 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2025-05-28 17:43:48.294218 | orchestrator | Wednesday 28 May 2025 17:43:44 +0000 (0:00:00.151) 0:00:12.030 ********* 2025-05-28 17:43:48.294229 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:43:48.294239 | orchestrator | 2025-05-28 17:43:48.294250 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2025-05-28 17:43:48.294261 | orchestrator | Wednesday 28 May 2025 17:43:44 +0000 (0:00:00.145) 0:00:12.176 ********* 2025-05-28 17:43:48.294271 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:43:48.294282 | orchestrator | 2025-05-28 17:43:48.294292 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-05-28 17:43:48.294303 | orchestrator | Wednesday 28 May 2025 17:43:45 +0000 (0:00:00.139) 0:00:12.316 ********* 2025-05-28 17:43:48.294375 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:43:48.294386 | orchestrator | 2025-05-28 17:43:48.294396 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-05-28 17:43:48.294407 | orchestrator | Wednesday 28 May 2025 17:43:45 +0000 (0:00:00.315) 0:00:12.631 ********* 2025-05-28 17:43:48.294418 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-28 17:43:48.294429 | orchestrator | 2025-05-28 17:43:48.294440 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-05-28 17:43:48.294450 | orchestrator | Wednesday 28 May 2025 17:43:45 +0000 (0:00:00.264) 0:00:12.896 ********* 2025-05-28 17:43:48.294466 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:43:48.294477 | orchestrator | 2025-05-28 17:43:48.294488 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-05-28 17:43:48.294498 | orchestrator | Wednesday 28 May 2025 17:43:45 +0000 (0:00:00.233) 0:00:13.129 ********* 2025-05-28 17:43:48.294509 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-28 17:43:48.294520 | orchestrator | 2025-05-28 17:43:48.294530 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-05-28 17:43:48.294541 | orchestrator | Wednesday 28 May 2025 17:43:47 +0000 (0:00:01.640) 0:00:14.770 ********* 2025-05-28 17:43:48.294552 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-28 17:43:48.294562 | orchestrator | 2025-05-28 17:43:48.294573 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-05-28 17:43:48.294583 | orchestrator | Wednesday 28 May 
2025 17:43:47 +0000 (0:00:00.274) 0:00:15.044 ********* 2025-05-28 17:43:48.294594 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-28 17:43:48.294604 | orchestrator | 2025-05-28 17:43:48.294623 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-28 17:43:50.684429 | orchestrator | Wednesday 28 May 2025 17:43:48 +0000 (0:00:00.297) 0:00:15.341 ********* 2025-05-28 17:43:50.684532 | orchestrator | 2025-05-28 17:43:50.684548 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-28 17:43:50.684560 | orchestrator | Wednesday 28 May 2025 17:43:48 +0000 (0:00:00.068) 0:00:15.410 ********* 2025-05-28 17:43:50.684571 | orchestrator | 2025-05-28 17:43:50.684582 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-28 17:43:50.684593 | orchestrator | Wednesday 28 May 2025 17:43:48 +0000 (0:00:00.069) 0:00:15.479 ********* 2025-05-28 17:43:50.684603 | orchestrator | 2025-05-28 17:43:50.684614 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-05-28 17:43:50.684625 | orchestrator | Wednesday 28 May 2025 17:43:48 +0000 (0:00:00.071) 0:00:15.551 ********* 2025-05-28 17:43:50.684637 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-28 17:43:50.684647 | orchestrator | 2025-05-28 17:43:50.684658 | orchestrator | TASK [Print report file information] ******************************************* 2025-05-28 17:43:50.684669 | orchestrator | Wednesday 28 May 2025 17:43:49 +0000 (0:00:01.506) 0:00:17.057 ********* 2025-05-28 17:43:50.684679 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-05-28 17:43:50.684691 | orchestrator |  "msg": [ 2025-05-28 17:43:50.684703 | orchestrator |  "Validator run completed.", 2025-05-28 17:43:50.684714 | orchestrator |  "You can find the report file here:", 2025-05-28 17:43:50.684725 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-05-28T17:43:33+00:00-report.json", 2025-05-28 17:43:50.684792 | orchestrator |  "on the following host:", 2025-05-28 17:43:50.684805 | orchestrator |  "testbed-manager" 2025-05-28 17:43:50.684816 | orchestrator |  ] 2025-05-28 17:43:50.684826 | orchestrator | } 2025-05-28 17:43:50.684837 | orchestrator | 2025-05-28 17:43:50.684848 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:43:50.684881 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-05-28 17:43:50.684916 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:43:50.684928 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:43:50.684939 | orchestrator | 2025-05-28 17:43:50.684949 | orchestrator | 2025-05-28 17:43:50.684960 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:43:50.684972 | orchestrator | Wednesday 28 May 2025 17:43:50 +0000 (0:00:00.585) 0:00:17.643 ********* 2025-05-28 17:43:50.684985 | orchestrator | =============================================================================== 2025-05-28 17:43:50.684997 | orchestrator | Aggregate test results step one ----------------------------------------- 1.64s 2025-05-28 17:43:50.685009 | orchestrator | Get 
monmap info from one mon container ---------------------------------- 1.55s 2025-05-28 17:43:50.685021 | orchestrator | Write report file ------------------------------------------------------- 1.51s 2025-05-28 17:43:50.685033 | orchestrator | Gather status data ------------------------------------------------------ 1.33s 2025-05-28 17:43:50.685044 | orchestrator | Get container info ------------------------------------------------------ 1.01s 2025-05-28 17:43:50.685057 | orchestrator | Create report output directory ------------------------------------------ 0.80s 2025-05-28 17:43:50.685068 | orchestrator | Aggregate test results step one ----------------------------------------- 0.63s 2025-05-28 17:43:50.685080 | orchestrator | Get timestamp for report file ------------------------------------------- 0.62s 2025-05-28 17:43:50.685092 | orchestrator | Print report file information ------------------------------------------- 0.59s 2025-05-28 17:43:50.685104 | orchestrator | Set test result to passed if container is existing ---------------------- 0.48s 2025-05-28 17:43:50.685116 | orchestrator | Set quorum test data ---------------------------------------------------- 0.35s 2025-05-28 17:43:50.685128 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.33s 2025-05-28 17:43:50.685140 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.33s 2025-05-28 17:43:50.685153 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.32s 2025-05-28 17:43:50.685164 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.32s 2025-05-28 17:43:50.685176 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s 2025-05-28 17:43:50.685188 | orchestrator | Set test result to failed if container is missing ----------------------- 0.31s 2025-05-28 17:43:50.685199 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.30s 2025-05-28 17:43:50.685212 | orchestrator | Aggregate test results step three --------------------------------------- 0.30s 2025-05-28 17:43:50.685224 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s 2025-05-28 17:43:50.913052 | orchestrator | + osism validate ceph-mgrs 2025-05-28 17:43:52.588946 | orchestrator | Registering Redlock._acquired_script 2025-05-28 17:43:52.589035 | orchestrator | Registering Redlock._extend_script 2025-05-28 17:43:52.589050 | orchestrator | Registering Redlock._release_script 2025-05-28 17:44:11.346053 | orchestrator | 2025-05-28 17:44:11.346124 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-05-28 17:44:11.346134 | orchestrator | 2025-05-28 17:44:11.346143 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-05-28 17:44:11.346151 | orchestrator | Wednesday 28 May 2025 17:43:56 +0000 (0:00:00.432) 0:00:00.432 ********* 2025-05-28 17:44:11.346158 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-28 17:44:11.346166 | orchestrator | 2025-05-28 17:44:11.346174 | orchestrator | TASK [Create report output directory] ****************************************** 2025-05-28 17:44:11.346181 | orchestrator | Wednesday 28 May 2025 17:43:57 +0000 (0:00:00.679) 0:00:01.112 ********* 2025-05-28 17:44:11.346188 | orchestrator | ok: [testbed-node-0 -> 
testbed-manager(192.168.16.5)] 2025-05-28 17:44:11.346195 | orchestrator | 2025-05-28 17:44:11.346202 | orchestrator | TASK [Define report vars] ****************************************************** 2025-05-28 17:44:11.346225 | orchestrator | Wednesday 28 May 2025 17:43:58 +0000 (0:00:00.822) 0:00:01.934 ********* 2025-05-28 17:44:11.346233 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:44:11.346241 | orchestrator | 2025-05-28 17:44:11.346248 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-05-28 17:44:11.346264 | orchestrator | Wednesday 28 May 2025 17:43:58 +0000 (0:00:00.246) 0:00:02.181 ********* 2025-05-28 17:44:11.346272 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:44:11.346279 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:44:11.346286 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:44:11.346293 | orchestrator | 2025-05-28 17:44:11.346300 | orchestrator | TASK [Get container info] ****************************************************** 2025-05-28 17:44:11.346308 | orchestrator | Wednesday 28 May 2025 17:43:58 +0000 (0:00:00.310) 0:00:02.491 ********* 2025-05-28 17:44:11.346315 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:44:11.346322 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:44:11.346329 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:44:11.346336 | orchestrator | 2025-05-28 17:44:11.346343 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-05-28 17:44:11.346350 | orchestrator | Wednesday 28 May 2025 17:43:59 +0000 (0:00:00.971) 0:00:03.463 ********* 2025-05-28 17:44:11.346357 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:44:11.346364 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:44:11.346371 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:44:11.346378 | orchestrator | 2025-05-28 17:44:11.346385 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-05-28 17:44:11.346393 | orchestrator | Wednesday 28 May 2025 17:44:00 +0000 (0:00:00.296) 0:00:03.759 ********* 2025-05-28 17:44:11.346400 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:44:11.346407 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:44:11.346414 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:44:11.346421 | orchestrator | 2025-05-28 17:44:11.346428 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-05-28 17:44:11.346435 | orchestrator | Wednesday 28 May 2025 17:44:00 +0000 (0:00:00.493) 0:00:04.253 ********* 2025-05-28 17:44:11.346442 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:44:11.346449 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:44:11.346456 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:44:11.346463 | orchestrator | 2025-05-28 17:44:11.346470 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2025-05-28 17:44:11.346477 | orchestrator | Wednesday 28 May 2025 17:44:01 +0000 (0:00:00.314) 0:00:04.568 ********* 2025-05-28 17:44:11.346484 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:44:11.346491 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:44:11.346498 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:44:11.346505 | orchestrator | 2025-05-28 17:44:11.346512 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-05-28 17:44:11.346520 | orchestrator | Wednesday 28 May 2025 17:44:01 +0000 
(0:00:00.286) 0:00:04.855 ********* 2025-05-28 17:44:11.346527 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:44:11.346534 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:44:11.346541 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:44:11.346547 | orchestrator | 2025-05-28 17:44:11.346555 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-05-28 17:44:11.346562 | orchestrator | Wednesday 28 May 2025 17:44:01 +0000 (0:00:00.304) 0:00:05.159 ********* 2025-05-28 17:44:11.346569 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:44:11.346576 | orchestrator | 2025-05-28 17:44:11.346583 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-05-28 17:44:11.346591 | orchestrator | Wednesday 28 May 2025 17:44:02 +0000 (0:00:00.643) 0:00:05.802 ********* 2025-05-28 17:44:11.346599 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:44:11.346607 | orchestrator | 2025-05-28 17:44:11.346615 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-05-28 17:44:11.346629 | orchestrator | Wednesday 28 May 2025 17:44:02 +0000 (0:00:00.242) 0:00:06.045 ********* 2025-05-28 17:44:11.346637 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:44:11.346645 | orchestrator | 2025-05-28 17:44:11.346653 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-28 17:44:11.346661 | orchestrator | Wednesday 28 May 2025 17:44:02 +0000 (0:00:00.248) 0:00:06.294 ********* 2025-05-28 17:44:11.346669 | orchestrator | 2025-05-28 17:44:11.346677 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-28 17:44:11.346685 | orchestrator | Wednesday 28 May 2025 17:44:02 +0000 (0:00:00.068) 0:00:06.362 ********* 2025-05-28 17:44:11.346693 | orchestrator | 2025-05-28 17:44:11.346701 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-28 17:44:11.346709 | orchestrator | Wednesday 28 May 2025 17:44:02 +0000 (0:00:00.068) 0:00:06.431 ********* 2025-05-28 17:44:11.346717 | orchestrator | 2025-05-28 17:44:11.346725 | orchestrator | TASK [Print report file information] ******************************************* 2025-05-28 17:44:11.346733 | orchestrator | Wednesday 28 May 2025 17:44:03 +0000 (0:00:00.079) 0:00:06.511 ********* 2025-05-28 17:44:11.346741 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:44:11.346749 | orchestrator | 2025-05-28 17:44:11.346757 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-05-28 17:44:11.346764 | orchestrator | Wednesday 28 May 2025 17:44:03 +0000 (0:00:00.249) 0:00:06.760 ********* 2025-05-28 17:44:11.346772 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:44:11.346780 | orchestrator | 2025-05-28 17:44:11.346814 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2025-05-28 17:44:11.346823 | orchestrator | Wednesday 28 May 2025 17:44:03 +0000 (0:00:00.256) 0:00:07.016 ********* 2025-05-28 17:44:11.346831 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:44:11.346839 | orchestrator | 2025-05-28 17:44:11.346847 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2025-05-28 17:44:11.346855 | orchestrator | Wednesday 28 May 2025 17:44:03 +0000 (0:00:00.115) 0:00:07.132 ********* 2025-05-28 17:44:11.346863 | orchestrator | 
changed: [testbed-node-0] 2025-05-28 17:44:11.346871 | orchestrator | 2025-05-28 17:44:11.346879 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-05-28 17:44:11.346887 | orchestrator | Wednesday 28 May 2025 17:44:05 +0000 (0:00:01.940) 0:00:09.072 ********* 2025-05-28 17:44:11.346895 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:44:11.346903 | orchestrator | 2025-05-28 17:44:11.346911 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2025-05-28 17:44:11.346918 | orchestrator | Wednesday 28 May 2025 17:44:05 +0000 (0:00:00.264) 0:00:09.337 ********* 2025-05-28 17:44:11.346927 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:44:11.346935 | orchestrator | 2025-05-28 17:44:11.346944 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-05-28 17:44:11.346952 | orchestrator | Wednesday 28 May 2025 17:44:06 +0000 (0:00:00.702) 0:00:10.040 ********* 2025-05-28 17:44:11.346959 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:44:11.346996 | orchestrator | 2025-05-28 17:44:11.347003 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2025-05-28 17:44:11.347010 | orchestrator | Wednesday 28 May 2025 17:44:06 +0000 (0:00:00.136) 0:00:10.177 ********* 2025-05-28 17:44:11.347017 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:44:11.347025 | orchestrator | 2025-05-28 17:44:11.347032 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-05-28 17:44:11.347039 | orchestrator | Wednesday 28 May 2025 17:44:06 +0000 (0:00:00.138) 0:00:10.315 ********* 2025-05-28 17:44:11.347046 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-28 17:44:11.347053 | orchestrator | 2025-05-28 17:44:11.347060 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-05-28 17:44:11.347067 | orchestrator | Wednesday 28 May 2025 17:44:07 +0000 (0:00:00.244) 0:00:10.560 ********* 2025-05-28 17:44:11.347074 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:44:11.347086 | orchestrator | 2025-05-28 17:44:11.347093 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-05-28 17:44:11.347101 | orchestrator | Wednesday 28 May 2025 17:44:07 +0000 (0:00:00.251) 0:00:10.811 ********* 2025-05-28 17:44:11.347108 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-28 17:44:11.347115 | orchestrator | 2025-05-28 17:44:11.347122 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-05-28 17:44:11.347129 | orchestrator | Wednesday 28 May 2025 17:44:08 +0000 (0:00:01.241) 0:00:12.052 ********* 2025-05-28 17:44:11.347136 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-28 17:44:11.347143 | orchestrator | 2025-05-28 17:44:11.347150 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-05-28 17:44:11.347157 | orchestrator | Wednesday 28 May 2025 17:44:08 +0000 (0:00:00.253) 0:00:12.306 ********* 2025-05-28 17:44:11.347164 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-28 17:44:11.347171 | orchestrator | 2025-05-28 17:44:11.347178 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-28 17:44:11.347185 | orchestrator | 
Wednesday 28 May 2025 17:44:09 +0000 (0:00:00.260) 0:00:12.567 ********* 2025-05-28 17:44:11.347192 | orchestrator | 2025-05-28 17:44:11.347199 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-28 17:44:11.347207 | orchestrator | Wednesday 28 May 2025 17:44:09 +0000 (0:00:00.067) 0:00:12.635 ********* 2025-05-28 17:44:11.347213 | orchestrator | 2025-05-28 17:44:11.347221 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-28 17:44:11.347228 | orchestrator | Wednesday 28 May 2025 17:44:09 +0000 (0:00:00.067) 0:00:12.703 ********* 2025-05-28 17:44:11.347235 | orchestrator | 2025-05-28 17:44:11.347242 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-05-28 17:44:11.347249 | orchestrator | Wednesday 28 May 2025 17:44:09 +0000 (0:00:00.069) 0:00:12.773 ********* 2025-05-28 17:44:11.347256 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-28 17:44:11.347263 | orchestrator | 2025-05-28 17:44:11.347270 | orchestrator | TASK [Print report file information] ******************************************* 2025-05-28 17:44:11.347277 | orchestrator | Wednesday 28 May 2025 17:44:10 +0000 (0:00:01.638) 0:00:14.411 ********* 2025-05-28 17:44:11.347284 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-05-28 17:44:11.347291 | orchestrator |  "msg": [ 2025-05-28 17:44:11.347298 | orchestrator |  "Validator run completed.", 2025-05-28 17:44:11.347305 | orchestrator |  "You can find the report file here:", 2025-05-28 17:44:11.347312 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-05-28T17:43:57+00:00-report.json", 2025-05-28 17:44:11.347320 | orchestrator |  "on the following host:", 2025-05-28 17:44:11.347327 | orchestrator |  "testbed-manager" 2025-05-28 17:44:11.347334 | orchestrator |  ] 2025-05-28 17:44:11.347342 | orchestrator | } 2025-05-28 17:44:11.347349 | orchestrator | 2025-05-28 17:44:11.347356 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:44:11.347363 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-05-28 17:44:11.347371 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:44:11.347384 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:44:11.631179 | orchestrator | 2025-05-28 17:44:11.631260 | orchestrator | 2025-05-28 17:44:11.631275 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:44:11.631287 | orchestrator | Wednesday 28 May 2025 17:44:11 +0000 (0:00:00.403) 0:00:14.815 ********* 2025-05-28 17:44:11.631298 | orchestrator | =============================================================================== 2025-05-28 17:44:11.631331 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.94s 2025-05-28 17:44:11.631342 | orchestrator | Write report file ------------------------------------------------------- 1.64s 2025-05-28 17:44:11.631352 | orchestrator | Aggregate test results step one ----------------------------------------- 1.24s 2025-05-28 17:44:11.631378 | orchestrator | Get container info ------------------------------------------------------ 0.97s 2025-05-28 17:44:11.631389 | orchestrator | Create 
report output directory ------------------------------------------ 0.82s 2025-05-28 17:44:11.631400 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.70s 2025-05-28 17:44:11.631410 | orchestrator | Get timestamp for report file ------------------------------------------- 0.68s 2025-05-28 17:44:11.631425 | orchestrator | Aggregate test results step one ----------------------------------------- 0.64s 2025-05-28 17:44:11.631436 | orchestrator | Set test result to passed if container is existing ---------------------- 0.49s 2025-05-28 17:44:11.631447 | orchestrator | Print report file information ------------------------------------------- 0.40s 2025-05-28 17:44:11.631457 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s 2025-05-28 17:44:11.631468 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s 2025-05-28 17:44:11.631479 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.30s 2025-05-28 17:44:11.631490 | orchestrator | Set test result to failed if container is missing ----------------------- 0.30s 2025-05-28 17:44:11.631500 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.29s 2025-05-28 17:44:11.631511 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.26s 2025-05-28 17:44:11.631521 | orchestrator | Aggregate test results step three --------------------------------------- 0.26s 2025-05-28 17:44:11.631532 | orchestrator | Fail due to missing containers ------------------------------------------ 0.26s 2025-05-28 17:44:11.631542 | orchestrator | Aggregate test results step two ----------------------------------------- 0.25s 2025-05-28 17:44:11.631553 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.25s 2025-05-28 17:44:11.845626 | orchestrator | + osism validate ceph-osds 2025-05-28 17:44:13.503114 | orchestrator | Registering Redlock._acquired_script 2025-05-28 17:44:13.503198 | orchestrator | Registering Redlock._extend_script 2025-05-28 17:44:13.503210 | orchestrator | Registering Redlock._release_script 2025-05-28 17:44:22.146899 | orchestrator | 2025-05-28 17:44:22.147000 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2025-05-28 17:44:22.147012 | orchestrator | 2025-05-28 17:44:22.147019 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-05-28 17:44:22.147027 | orchestrator | Wednesday 28 May 2025 17:44:17 +0000 (0:00:00.409) 0:00:00.409 ********* 2025-05-28 17:44:22.147035 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-28 17:44:22.147041 | orchestrator | 2025-05-28 17:44:22.147049 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-28 17:44:22.147056 | orchestrator | Wednesday 28 May 2025 17:44:18 +0000 (0:00:00.629) 0:00:01.039 ********* 2025-05-28 17:44:22.147063 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-28 17:44:22.147069 | orchestrator | 2025-05-28 17:44:22.147076 | orchestrator | TASK [Create report output directory] ****************************************** 2025-05-28 17:44:22.147083 | orchestrator | Wednesday 28 May 2025 17:44:18 +0000 (0:00:00.395) 0:00:01.434 ********* 2025-05-28 17:44:22.147090 | orchestrator | ok: [testbed-node-3 -> 
testbed-manager(192.168.16.5)] 2025-05-28 17:44:22.147097 | orchestrator | 2025-05-28 17:44:22.147104 | orchestrator | TASK [Define report vars] ****************************************************** 2025-05-28 17:44:22.147111 | orchestrator | Wednesday 28 May 2025 17:44:19 +0000 (0:00:00.931) 0:00:02.366 ********* 2025-05-28 17:44:22.147118 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:44:22.147127 | orchestrator | 2025-05-28 17:44:22.147133 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-05-28 17:44:22.147160 | orchestrator | Wednesday 28 May 2025 17:44:19 +0000 (0:00:00.121) 0:00:02.488 ********* 2025-05-28 17:44:22.147168 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:44:22.147175 | orchestrator | 2025-05-28 17:44:22.147182 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-05-28 17:44:22.147189 | orchestrator | Wednesday 28 May 2025 17:44:20 +0000 (0:00:00.134) 0:00:02.622 ********* 2025-05-28 17:44:22.147196 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:44:22.147202 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:44:22.147209 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:44:22.147215 | orchestrator | 2025-05-28 17:44:22.147222 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-05-28 17:44:22.147230 | orchestrator | Wednesday 28 May 2025 17:44:20 +0000 (0:00:00.317) 0:00:02.940 ********* 2025-05-28 17:44:22.147237 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:44:22.147244 | orchestrator | 2025-05-28 17:44:22.147251 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-05-28 17:44:22.147258 | orchestrator | Wednesday 28 May 2025 17:44:20 +0000 (0:00:00.152) 0:00:03.093 ********* 2025-05-28 17:44:22.147265 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:44:22.147272 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:44:22.147278 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:44:22.147285 | orchestrator | 2025-05-28 17:44:22.147292 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-05-28 17:44:22.147298 | orchestrator | Wednesday 28 May 2025 17:44:20 +0000 (0:00:00.319) 0:00:03.412 ********* 2025-05-28 17:44:22.147305 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:44:22.147313 | orchestrator | 2025-05-28 17:44:22.147320 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-05-28 17:44:22.147327 | orchestrator | Wednesday 28 May 2025 17:44:21 +0000 (0:00:00.556) 0:00:03.969 ********* 2025-05-28 17:44:22.147334 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:44:22.147341 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:44:22.147348 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:44:22.147355 | orchestrator | 2025-05-28 17:44:22.147363 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2025-05-28 17:44:22.147370 | orchestrator | Wednesday 28 May 2025 17:44:21 +0000 (0:00:00.451) 0:00:04.420 ********* 2025-05-28 17:44:22.147379 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'de3fc3fe762520a884a20d3fbe0394b0419d3b0e4c1e5b723b774ce416dc9a20', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-05-28 17:44:22.147401 | orchestrator | skipping: [testbed-node-3] 
=> (item={'id': '9aaa2dc02370a101c25c6edb529be3cf997caa2d1d455644513b767c659fee19', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-05-28 17:44:22.147410 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'af99e7235439cb84eb2cd40aa2a05f5a32a50e943105ba318754d423a331903b', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-05-28 17:44:22.147420 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b820e29efc251b4bafe9a46eaf8017ec173c6753aa3dc08fcf3cedfe6dcd29bb', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-05-28 17:44:22.147433 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1b2350a167e779c04c97da7b7477430c830879dbf2f2cc984ef38defcdb5d1c8', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-05-28 17:44:22.147456 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd7032c914c15933f432d9c288d29af907f585b2da12417e360996b07c56e5e8b', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-05-28 17:44:22.147470 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e57eb1670c6fe12090f004025b6a280d193dfc8e34b3583ed39eee75443b19e6', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2025-05-28 17:44:22.147478 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3e5a85a6dbcbb918b5d82cdd2b2e9cfd8708d4b0d56d23c538d10d95055ae6f1', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-05-28 17:44:22.147486 | orchestrator | skipping: [testbed-node-3] => (item={'id': '16abb75595d03fe80d768459b450cb6ef3b9bd0c8cf51579a47ab0a7a095ef30', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-05-28 17:44:22.147497 | orchestrator | skipping: [testbed-node-3] => (item={'id': '363abf57df53541ad4571cdd8991726b3dc8a0f330b06b86611bc9ea34fdc96f', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2025-05-28 17:44:22.147505 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c8743c2105d22338cf043120f5aec3052d61c76dc1095a041ea7a6835b80dbd5', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})  2025-05-28 17:44:22.147513 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c965ebc3fb7bdf42962316a90160be11e899339939c5ef2252e08086029d2de3', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})  2025-05-28 17:44:22.147521 | orchestrator | ok: [testbed-node-3] => (item={'id': 'ff8b6a59705d05a6b33d000d78e4cf700b0885d6ff29733fdce92f906b4e0b5b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-05-28 
17:44:22.147529 | orchestrator | ok: [testbed-node-3] => (item={'id': '8f879244f38733c2a87f8a6c4bbbe19ddb13d6b24290601bc7f702b893a69568', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-05-28 17:44:22.147537 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9084a5e0cf94fe3189dbebfe1bb1dd2d873a24344a1808a8b27016716d2e7cba', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2025-05-28 17:44:22.147545 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e4ee6c54d67690363d90746c53a3d160ce1ffc26b36c283ae37e38e10832a661', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-05-28 17:44:22.147552 | orchestrator | skipping: [testbed-node-3] => (item={'id': '06b8a6b0cb23b48c664bf99f69aa0a2e14826a1c2a2f780e2428fdde2e24ad97', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-05-28 17:44:22.147559 | orchestrator | skipping: [testbed-node-4] => (item={'id': '513de66aed001b113cca4f122974e7ae19680640ace43c3c5413c3c3637e55d4', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-05-28 17:44:22.147567 | orchestrator | skipping: [testbed-node-3] => (item={'id': '226d8cd1528416dbe157ebb414a2465f2e2de598ad2f6e81724c6a79f0ee1990', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-05-28 17:44:22.147575 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c8d96312eb84799f4f5fc4669633f8cb4d281b6e57e39f15a395fca9ce63e7f1', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-05-28 17:44:22.147589 | orchestrator | skipping: [testbed-node-3] => (item={'id': '749a237513a811431fee358d735afd43de579e7c4cb9c7dcc391b394239342d9', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})  2025-05-28 17:44:22.147602 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd7bf035b1d09556169560cf6229f06e0cacd84c4d183cb5efe04ad418e0ec999', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-05-28 17:44:22.302450 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8d7c8793bee7cef90227a96b1c3d23ad35127863665ce255901cfa769f80b9f0', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-05-28 17:44:22.303406 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8fefc41966bb40ce95e8b10c00a61c94228e0754067c9aaafcf11bf38fa39777', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})  2025-05-28 17:44:22.303449 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd1c806913cee44ae5c0670450013d98ce335224cab68a8a6a080a8d059fa2fdb', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})  2025-05-28 17:44:22.303463 | 
orchestrator | skipping: [testbed-node-4] => (item={'id': 'ca533a0aac63cff6d48074136bd1ee0191f3671f25fdbb76abde7ab6400ad585', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-05-28 17:44:22.303475 | orchestrator | skipping: [testbed-node-4] => (item={'id': '347872c393299e7afe3bf81446e1a7048c1ad9b8d36fa8d230a7a668f641373d', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2025-05-28 17:44:22.303487 | orchestrator | skipping: [testbed-node-4] => (item={'id': '427af8157ad0fe286dec6a81da46456d7ae1d2cc789bdc346211d8aa8115e235', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-05-28 17:44:22.303499 | orchestrator | skipping: [testbed-node-4] => (item={'id': '063c52fa083ea757e310c15a566c5f9ac67d3ca69354edcb0396018f7df5aecd', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-05-28 17:44:22.303511 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b078e3dbcaffa8b3572c6cc3920805fb0b8cf10d33dbf88943eb683f2f5e0275', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2025-05-28 17:44:22.303521 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ea4e99c137f6f10dfb680e14beb50b7c8fd82fcdd710ca435f35204e39dfd2dc', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})  2025-05-28 17:44:22.303553 | orchestrator | skipping: [testbed-node-4] => (item={'id': '88790daa4bb369b37f8d5b21032eba6597c7f0802c8736975a56951568c99f5d', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})  2025-05-28 17:44:22.303570 | orchestrator | ok: [testbed-node-4] => (item={'id': 'd61d41c748fcc5556349a29007043a2dead6078cd3ceb5c7a85e03d29fb76163', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-05-28 17:44:22.303582 | orchestrator | ok: [testbed-node-4] => (item={'id': '42d28658383a7043ad32a29e4c11a587985af493d5e99b22bc2f4c8b9183095d', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-05-28 17:44:22.303614 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ec0ba2323964d27d70ced3655269d78c87ed82b2f45e2345782bffdabf27065f', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2025-05-28 17:44:22.303626 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2946bf7fbe6cbd6ce38c951f23f462bc07d34c201f1d44a459d378d4ea4abe07', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-05-28 17:44:22.303637 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd746a97947ef7cc5a1a13e4fee1ebf7cad8ab5a4614b821c866479f00a0ae586', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes 
(healthy)'})  2025-05-28 17:44:22.303670 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1519a47e80afacb87ccc87aa209728ea218b0d890e55d469189204262ba6b815', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})  2025-05-28 17:44:22.303682 | orchestrator | skipping: [testbed-node-4] => (item={'id': '45959bfc06395b9159b9571140ee830801343cdca4354f3f08077f6e9fbce61c', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})  2025-05-28 17:44:22.303693 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9ac49f9f3ee959f9bad381c94cc01fd9e5695180b5d0ce38b32a1880c0de4dee', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})  2025-05-28 17:44:22.303705 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1ed9a0aaacc697c3cfc1f0e2f95f656105a6189a5039b5b9abc52c97aaa61127', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-05-28 17:44:22.303716 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a5b90c695d5b2887e7c87d4ed360f6229ed070e7e9993e465615c069859adb68', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-05-28 17:44:22.303727 | orchestrator | skipping: [testbed-node-5] => (item={'id': '605378f812eb02d43f33e81413a5c8d37131f9f5451356626c899e2aad2d2de6', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-05-28 17:44:22.303738 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ceea499969d7d1f1f5bbc8e8fa228be4725a13bfbdf1101ed268414c9d11ff56', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-05-28 17:44:22.303749 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f01ccc3627b37f327c354b0d3da623ab41ee558ecd9000489315fd2094a8b9bc', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-05-28 17:44:22.303760 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7cc5bc64847517820bda81dcc13bd3892aa5873afc0ba3d92fceb6234d5b924e', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-05-28 17:44:22.303771 | orchestrator | skipping: [testbed-node-5] => (item={'id': '69695194f152b1b078061c65986275095321a5710e9a2f01743964aadead96b1', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2025-05-28 17:44:22.303781 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c2e43fc012a1c37a7dab029d03369c7ac308f2f8486b3a910127d5f4ce6e6907', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-05-28 17:44:22.303805 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'df91663eb60c99e2ea83cad3df5ae7418eab77c6947b2619f206c7dfe3c332d0', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 
'status': 'Up 15 minutes'})  2025-05-28 17:44:22.303894 | orchestrator | skipping: [testbed-node-5] => (item={'id': '85ff1df4694117f73b0c4c4a5b16b2fc8c03b75ea1c70df75247cbd39eac519b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2025-05-28 17:44:22.303907 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5c240227e32f1823a46460214f4b10a820d867df266373c7c3a57dcf5ec63400', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})  2025-05-28 17:44:22.303918 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ccd98803254f36fe31c0e22cb8aeb1261a43c925868d86e70e9a375e47261df7', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})  2025-05-28 17:44:22.303929 | orchestrator | ok: [testbed-node-5] => (item={'id': 'ae57c5a45b56aa93a803b1535d5b2fca82e7f17bf21be6c825f898580c5838a3', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-05-28 17:44:22.303949 | orchestrator | ok: [testbed-node-5] => (item={'id': 'aed4f3be601bb8acc7e13ad795ef55df4a32c67176a1ecad1915af0b94eba2cf', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-05-28 17:44:33.196760 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5cbe7508188fe64e8ef2b1c488aab574bc596fa7d3db2eb85f6ef582c3d6e568', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2025-05-28 17:44:33.196930 | orchestrator | skipping: [testbed-node-5] => (item={'id': '27a28ba337cb672c4204533885633f9d185d536d52f10a38e6b26c655cbea4c7', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-05-28 17:44:33.196948 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7ce513f448d0682af5f235120912409e3079d6b6cc5e0054a1ca5357d723bfe7', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-05-28 17:44:33.196960 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9c9e8231260cba89ebc2921a18374711ef0bb6a4bd0227972b8d6d5d3439aa91', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})  2025-05-28 17:44:33.196969 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3c166a01e654da38869a87064081951d5fde5bf10d9ba731255c827719ae14e9', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})  2025-05-28 17:44:33.196980 | orchestrator | skipping: [testbed-node-5] => (item={'id': '45e78380f548d9bcd86bfbfb53945c812d04197d39d914235afc7d83e7f1a504', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})  2025-05-28 17:44:33.196990 | orchestrator | 2025-05-28 17:44:33.197001 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-05-28 17:44:33.197012 | orchestrator | Wednesday 28 May 2025 17:44:22 +0000 (0:00:00.458) 0:00:04.878 ********* 2025-05-28 17:44:33.197021 | 
orchestrator | ok: [testbed-node-3] 2025-05-28 17:44:33.197031 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:44:33.197041 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:44:33.197050 | orchestrator | 2025-05-28 17:44:33.197060 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-05-28 17:44:33.197095 | orchestrator | Wednesday 28 May 2025 17:44:22 +0000 (0:00:00.292) 0:00:05.171 ********* 2025-05-28 17:44:33.197105 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:44:33.197115 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:44:33.197125 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:44:33.197134 | orchestrator | 2025-05-28 17:44:33.197144 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-05-28 17:44:33.197153 | orchestrator | Wednesday 28 May 2025 17:44:23 +0000 (0:00:00.489) 0:00:05.660 ********* 2025-05-28 17:44:33.197163 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:44:33.197172 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:44:33.197181 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:44:33.197190 | orchestrator | 2025-05-28 17:44:33.197199 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-05-28 17:44:33.197209 | orchestrator | Wednesday 28 May 2025 17:44:23 +0000 (0:00:00.311) 0:00:05.972 ********* 2025-05-28 17:44:33.197218 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:44:33.197241 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:44:33.197250 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:44:33.197260 | orchestrator | 2025-05-28 17:44:33.197269 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-05-28 17:44:33.197278 | orchestrator | Wednesday 28 May 2025 17:44:23 +0000 (0:00:00.271) 0:00:06.244 ********* 2025-05-28 17:44:33.197288 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-05-28 17:44:33.197298 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-05-28 17:44:33.197308 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:44:33.197317 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-05-28 17:44:33.197326 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-05-28 17:44:33.197336 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:44:33.197345 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-05-28 17:44:33.197354 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-05-28 17:44:33.197363 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:44:33.197372 | orchestrator | 2025-05-28 17:44:33.197382 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-05-28 17:44:33.197391 | orchestrator | Wednesday 28 May 2025 17:44:24 +0000 (0:00:00.305) 0:00:06.550 ********* 2025-05-28 17:44:33.197400 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:44:33.197410 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:44:33.197419 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:44:33.197428 | orchestrator | 2025-05-28 17:44:33.197437 | orchestrator | TASK [Set test 
result to failed if an OSD is not running] ********************** 2025-05-28 17:44:33.197447 | orchestrator | Wednesday 28 May 2025 17:44:24 +0000 (0:00:00.482) 0:00:07.033 ********* 2025-05-28 17:44:33.197456 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:44:33.197465 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:44:33.197475 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:44:33.197484 | orchestrator | 2025-05-28 17:44:33.197509 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-05-28 17:44:33.197519 | orchestrator | Wednesday 28 May 2025 17:44:24 +0000 (0:00:00.293) 0:00:07.326 ********* 2025-05-28 17:44:33.197528 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:44:33.197537 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:44:33.197546 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:44:33.197555 | orchestrator | 2025-05-28 17:44:33.197565 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-05-28 17:44:33.197574 | orchestrator | Wednesday 28 May 2025 17:44:25 +0000 (0:00:00.285) 0:00:07.612 ********* 2025-05-28 17:44:33.197583 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:44:33.197599 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:44:33.197609 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:44:33.197618 | orchestrator | 2025-05-28 17:44:33.197627 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-05-28 17:44:33.197636 | orchestrator | Wednesday 28 May 2025 17:44:25 +0000 (0:00:00.294) 0:00:07.907 ********* 2025-05-28 17:44:33.197646 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:44:33.197655 | orchestrator | 2025-05-28 17:44:33.197664 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-05-28 17:44:33.197673 | orchestrator | Wednesday 28 May 2025 17:44:26 +0000 (0:00:00.633) 0:00:08.540 ********* 2025-05-28 17:44:33.197682 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:44:33.197692 | orchestrator | 2025-05-28 17:44:33.197701 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-05-28 17:44:33.197710 | orchestrator | Wednesday 28 May 2025 17:44:26 +0000 (0:00:00.256) 0:00:08.796 ********* 2025-05-28 17:44:33.197719 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:44:33.197728 | orchestrator | 2025-05-28 17:44:33.197738 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-28 17:44:33.197747 | orchestrator | Wednesday 28 May 2025 17:44:26 +0000 (0:00:00.256) 0:00:09.053 ********* 2025-05-28 17:44:33.197756 | orchestrator | 2025-05-28 17:44:33.197765 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-28 17:44:33.197774 | orchestrator | Wednesday 28 May 2025 17:44:26 +0000 (0:00:00.066) 0:00:09.119 ********* 2025-05-28 17:44:33.197784 | orchestrator | 2025-05-28 17:44:33.197793 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-28 17:44:33.197802 | orchestrator | Wednesday 28 May 2025 17:44:26 +0000 (0:00:00.066) 0:00:09.186 ********* 2025-05-28 17:44:33.197811 | orchestrator | 2025-05-28 17:44:33.197820 | orchestrator | TASK [Print report file information] ******************************************* 2025-05-28 17:44:33.197829 | orchestrator | Wednesday 28 May 2025 17:44:26 +0000 
(0:00:00.073) 0:00:09.260 ********* 2025-05-28 17:44:33.197857 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:44:33.197867 | orchestrator | 2025-05-28 17:44:33.197876 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-05-28 17:44:33.197886 | orchestrator | Wednesday 28 May 2025 17:44:26 +0000 (0:00:00.238) 0:00:09.499 ********* 2025-05-28 17:44:33.197895 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:44:33.197904 | orchestrator | 2025-05-28 17:44:33.197913 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-05-28 17:44:33.197922 | orchestrator | Wednesday 28 May 2025 17:44:27 +0000 (0:00:00.235) 0:00:09.735 ********* 2025-05-28 17:44:33.197932 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:44:33.197941 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:44:33.197950 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:44:33.198001 | orchestrator | 2025-05-28 17:44:33.198012 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-05-28 17:44:33.198077 | orchestrator | Wednesday 28 May 2025 17:44:27 +0000 (0:00:00.282) 0:00:10.017 ********* 2025-05-28 17:44:33.198087 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:44:33.198097 | orchestrator | 2025-05-28 17:44:33.198106 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-05-28 17:44:33.198116 | orchestrator | Wednesday 28 May 2025 17:44:28 +0000 (0:00:00.604) 0:00:10.622 ********* 2025-05-28 17:44:33.198126 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-28 17:44:33.198135 | orchestrator | 2025-05-28 17:44:33.198145 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-05-28 17:44:33.198155 | orchestrator | Wednesday 28 May 2025 17:44:29 +0000 (0:00:01.638) 0:00:12.261 ********* 2025-05-28 17:44:33.198164 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:44:33.198174 | orchestrator | 2025-05-28 17:44:33.198183 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-05-28 17:44:33.198193 | orchestrator | Wednesday 28 May 2025 17:44:29 +0000 (0:00:00.126) 0:00:12.388 ********* 2025-05-28 17:44:33.198210 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:44:33.198219 | orchestrator | 2025-05-28 17:44:33.198229 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-05-28 17:44:33.198238 | orchestrator | Wednesday 28 May 2025 17:44:30 +0000 (0:00:00.298) 0:00:12.686 ********* 2025-05-28 17:44:33.198248 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:44:33.198257 | orchestrator | 2025-05-28 17:44:33.198267 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-05-28 17:44:33.198276 | orchestrator | Wednesday 28 May 2025 17:44:30 +0000 (0:00:00.112) 0:00:12.799 ********* 2025-05-28 17:44:33.198286 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:44:33.198295 | orchestrator | 2025-05-28 17:44:33.198305 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-05-28 17:44:33.198314 | orchestrator | Wednesday 28 May 2025 17:44:30 +0000 (0:00:00.120) 0:00:12.920 ********* 2025-05-28 17:44:33.198324 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:44:33.198333 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:44:33.198342 | orchestrator | 
ok: [testbed-node-5] 2025-05-28 17:44:33.198352 | orchestrator | 2025-05-28 17:44:33.198361 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-05-28 17:44:33.198371 | orchestrator | Wednesday 28 May 2025 17:44:30 +0000 (0:00:00.279) 0:00:13.199 ********* 2025-05-28 17:44:33.198381 | orchestrator | changed: [testbed-node-3] 2025-05-28 17:44:33.198390 | orchestrator | changed: [testbed-node-4] 2025-05-28 17:44:33.198400 | orchestrator | changed: [testbed-node-5] 2025-05-28 17:44:33.198409 | orchestrator | 2025-05-28 17:44:33.198419 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-05-28 17:44:33.198474 | orchestrator | Wednesday 28 May 2025 17:44:33 +0000 (0:00:02.479) 0:00:15.678 ********* 2025-05-28 17:44:42.338375 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:44:42.338493 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:44:42.338507 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:44:42.338519 | orchestrator | 2025-05-28 17:44:42.338533 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-05-28 17:44:42.338545 | orchestrator | Wednesday 28 May 2025 17:44:33 +0000 (0:00:00.302) 0:00:15.981 ********* 2025-05-28 17:44:42.338556 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:44:42.338567 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:44:42.338577 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:44:42.338588 | orchestrator | 2025-05-28 17:44:42.338599 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-05-28 17:44:42.338610 | orchestrator | Wednesday 28 May 2025 17:44:33 +0000 (0:00:00.493) 0:00:16.475 ********* 2025-05-28 17:44:42.338620 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:44:42.338632 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:44:42.338643 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:44:42.338653 | orchestrator | 2025-05-28 17:44:42.338664 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-05-28 17:44:42.338675 | orchestrator | Wednesday 28 May 2025 17:44:34 +0000 (0:00:00.294) 0:00:16.769 ********* 2025-05-28 17:44:42.338686 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:44:42.338696 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:44:42.338707 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:44:42.338717 | orchestrator | 2025-05-28 17:44:42.338728 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-05-28 17:44:42.338739 | orchestrator | Wednesday 28 May 2025 17:44:34 +0000 (0:00:00.471) 0:00:17.241 ********* 2025-05-28 17:44:42.338749 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:44:42.338760 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:44:42.338770 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:44:42.338781 | orchestrator | 2025-05-28 17:44:42.338792 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-05-28 17:44:42.338802 | orchestrator | Wednesday 28 May 2025 17:44:35 +0000 (0:00:00.272) 0:00:17.513 ********* 2025-05-28 17:44:42.338813 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:44:42.338849 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:44:42.338888 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:44:42.338900 | orchestrator | 2025-05-28 17:44:42.338913 | orchestrator | TASK 
[Prepare test data] ******************************************************* 2025-05-28 17:44:42.338925 | orchestrator | Wednesday 28 May 2025 17:44:35 +0000 (0:00:00.273) 0:00:17.787 ********* 2025-05-28 17:44:42.338937 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:44:42.338948 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:44:42.338960 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:44:42.338972 | orchestrator | 2025-05-28 17:44:42.338983 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-05-28 17:44:42.338995 | orchestrator | Wednesday 28 May 2025 17:44:35 +0000 (0:00:00.493) 0:00:18.280 ********* 2025-05-28 17:44:42.339007 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:44:42.339018 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:44:42.339030 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:44:42.339042 | orchestrator | 2025-05-28 17:44:42.339054 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-05-28 17:44:42.339066 | orchestrator | Wednesday 28 May 2025 17:44:36 +0000 (0:00:00.756) 0:00:19.037 ********* 2025-05-28 17:44:42.339078 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:44:42.339090 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:44:42.339102 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:44:42.339114 | orchestrator | 2025-05-28 17:44:42.339124 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-05-28 17:44:42.339135 | orchestrator | Wednesday 28 May 2025 17:44:36 +0000 (0:00:00.299) 0:00:19.336 ********* 2025-05-28 17:44:42.339146 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:44:42.339156 | orchestrator | skipping: [testbed-node-4] 2025-05-28 17:44:42.339184 | orchestrator | skipping: [testbed-node-5] 2025-05-28 17:44:42.339195 | orchestrator | 2025-05-28 17:44:42.339206 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-05-28 17:44:42.339216 | orchestrator | Wednesday 28 May 2025 17:44:37 +0000 (0:00:00.276) 0:00:19.612 ********* 2025-05-28 17:44:42.339227 | orchestrator | ok: [testbed-node-3] 2025-05-28 17:44:42.339238 | orchestrator | ok: [testbed-node-4] 2025-05-28 17:44:42.339248 | orchestrator | ok: [testbed-node-5] 2025-05-28 17:44:42.339259 | orchestrator | 2025-05-28 17:44:42.339269 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-05-28 17:44:42.339280 | orchestrator | Wednesday 28 May 2025 17:44:37 +0000 (0:00:00.471) 0:00:20.084 ********* 2025-05-28 17:44:42.339291 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-28 17:44:42.339301 | orchestrator | 2025-05-28 17:44:42.339312 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-05-28 17:44:42.339323 | orchestrator | Wednesday 28 May 2025 17:44:37 +0000 (0:00:00.244) 0:00:20.329 ********* 2025-05-28 17:44:42.339334 | orchestrator | skipping: [testbed-node-3] 2025-05-28 17:44:42.339344 | orchestrator | 2025-05-28 17:44:42.339355 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-05-28 17:44:42.339365 | orchestrator | Wednesday 28 May 2025 17:44:38 +0000 (0:00:00.246) 0:00:20.575 ********* 2025-05-28 17:44:42.339376 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-28 17:44:42.339387 | orchestrator | 2025-05-28 17:44:42.339397 | orchestrator | TASK 
[Aggregate test results step two] ***************************************** 2025-05-28 17:44:42.339408 | orchestrator | Wednesday 28 May 2025 17:44:39 +0000 (0:00:01.552) 0:00:22.128 ********* 2025-05-28 17:44:42.339420 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-28 17:44:42.339430 | orchestrator | 2025-05-28 17:44:42.339441 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-05-28 17:44:42.339452 | orchestrator | Wednesday 28 May 2025 17:44:39 +0000 (0:00:00.247) 0:00:22.375 ********* 2025-05-28 17:44:42.339462 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-28 17:44:42.339473 | orchestrator | 2025-05-28 17:44:42.339484 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-28 17:44:42.339503 | orchestrator | Wednesday 28 May 2025 17:44:40 +0000 (0:00:00.245) 0:00:22.621 ********* 2025-05-28 17:44:42.339514 | orchestrator | 2025-05-28 17:44:42.339542 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-28 17:44:42.339553 | orchestrator | Wednesday 28 May 2025 17:44:40 +0000 (0:00:00.067) 0:00:22.688 ********* 2025-05-28 17:44:42.339564 | orchestrator | 2025-05-28 17:44:42.339575 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-28 17:44:42.339585 | orchestrator | Wednesday 28 May 2025 17:44:40 +0000 (0:00:00.072) 0:00:22.760 ********* 2025-05-28 17:44:42.339596 | orchestrator | 2025-05-28 17:44:42.339606 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-05-28 17:44:42.339617 | orchestrator | Wednesday 28 May 2025 17:44:40 +0000 (0:00:00.072) 0:00:22.833 ********* 2025-05-28 17:44:42.339627 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-28 17:44:42.339638 | orchestrator | 2025-05-28 17:44:42.339648 | orchestrator | TASK [Print report file information] ******************************************* 2025-05-28 17:44:42.339659 | orchestrator | Wednesday 28 May 2025 17:44:41 +0000 (0:00:01.173) 0:00:24.007 ********* 2025-05-28 17:44:42.339669 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-05-28 17:44:42.339681 | orchestrator |  "msg": [ 2025-05-28 17:44:42.339691 | orchestrator |  "Validator run completed.", 2025-05-28 17:44:42.339702 | orchestrator |  "You can find the report file here:", 2025-05-28 17:44:42.339713 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-05-28T17:44:18+00:00-report.json", 2025-05-28 17:44:42.339724 | orchestrator |  "on the following host:", 2025-05-28 17:44:42.339735 | orchestrator |  "testbed-manager" 2025-05-28 17:44:42.339746 | orchestrator |  ] 2025-05-28 17:44:42.339758 | orchestrator | } 2025-05-28 17:44:42.339769 | orchestrator | 2025-05-28 17:44:42.339779 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:44:42.339791 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-05-28 17:44:42.339804 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-05-28 17:44:42.339815 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-05-28 17:44:42.339825 | orchestrator | 2025-05-28 17:44:42.339836 | orchestrator | 2025-05-28 
17:44:42.339847 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:44:42.339884 | orchestrator | Wednesday 28 May 2025 17:44:42 +0000 (0:00:00.539) 0:00:24.546 ********* 2025-05-28 17:44:42.339895 | orchestrator | =============================================================================== 2025-05-28 17:44:42.339906 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.48s 2025-05-28 17:44:42.339917 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.64s 2025-05-28 17:44:42.339928 | orchestrator | Aggregate test results step one ----------------------------------------- 1.55s 2025-05-28 17:44:42.339938 | orchestrator | Write report file ------------------------------------------------------- 1.17s 2025-05-28 17:44:42.339949 | orchestrator | Create report output directory ------------------------------------------ 0.93s 2025-05-28 17:44:42.339959 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.76s 2025-05-28 17:44:42.339970 | orchestrator | Aggregate test results step one ----------------------------------------- 0.63s 2025-05-28 17:44:42.339986 | orchestrator | Get timestamp for report file ------------------------------------------- 0.63s 2025-05-28 17:44:42.339997 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.60s 2025-05-28 17:44:42.340008 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.56s 2025-05-28 17:44:42.340026 | orchestrator | Print report file information ------------------------------------------- 0.54s 2025-05-28 17:44:42.340036 | orchestrator | Prepare test data ------------------------------------------------------- 0.49s 2025-05-28 17:44:42.340047 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.49s 2025-05-28 17:44:42.340058 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.49s 2025-05-28 17:44:42.340068 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.48s 2025-05-28 17:44:42.340079 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.47s 2025-05-28 17:44:42.340090 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.47s 2025-05-28 17:44:42.340100 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.46s 2025-05-28 17:44:42.340111 | orchestrator | Prepare test data ------------------------------------------------------- 0.45s 2025-05-28 17:44:42.340122 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.40s 2025-05-28 17:44:42.577895 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-05-28 17:44:42.586816 | orchestrator | + set -e 2025-05-28 17:44:42.586842 | orchestrator | + source /opt/manager-vars.sh 2025-05-28 17:44:42.586851 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-28 17:44:42.586888 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-28 17:44:42.586896 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-28 17:44:42.586903 | orchestrator | ++ CEPH_VERSION=reef 2025-05-28 17:44:42.586911 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-28 17:44:42.586920 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-28 17:44:42.586928 | orchestrator | ++ export 
MANAGER_VERSION=latest 2025-05-28 17:44:42.586936 | orchestrator | ++ MANAGER_VERSION=latest 2025-05-28 17:44:42.586944 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-28 17:44:42.586952 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-28 17:44:42.586959 | orchestrator | ++ export ARA=false 2025-05-28 17:44:42.587098 | orchestrator | ++ ARA=false 2025-05-28 17:44:42.587113 | orchestrator | ++ export TEMPEST=false 2025-05-28 17:44:42.587121 | orchestrator | ++ TEMPEST=false 2025-05-28 17:44:42.587128 | orchestrator | ++ export IS_ZUUL=true 2025-05-28 17:44:42.587136 | orchestrator | ++ IS_ZUUL=true 2025-05-28 17:44:42.587144 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180 2025-05-28 17:44:42.587152 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180 2025-05-28 17:44:42.587160 | orchestrator | ++ export EXTERNAL_API=false 2025-05-28 17:44:42.587284 | orchestrator | ++ EXTERNAL_API=false 2025-05-28 17:44:42.587297 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-28 17:44:42.587306 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-28 17:44:42.587314 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-28 17:44:42.587322 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-28 17:44:42.587330 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-28 17:44:42.587337 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-28 17:44:42.587350 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-05-28 17:44:42.587358 | orchestrator | + source /etc/os-release 2025-05-28 17:44:42.587366 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.2 LTS' 2025-05-28 17:44:42.587373 | orchestrator | ++ NAME=Ubuntu 2025-05-28 17:44:42.587381 | orchestrator | ++ VERSION_ID=24.04 2025-05-28 17:44:42.587389 | orchestrator | ++ VERSION='24.04.2 LTS (Noble Numbat)' 2025-05-28 17:44:42.587396 | orchestrator | ++ VERSION_CODENAME=noble 2025-05-28 17:44:42.587404 | orchestrator | ++ ID=ubuntu 2025-05-28 17:44:42.587411 | orchestrator | ++ ID_LIKE=debian 2025-05-28 17:44:42.587419 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-05-28 17:44:42.587427 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-05-28 17:44:42.587434 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-05-28 17:44:42.587442 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-05-28 17:44:42.587450 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-05-28 17:44:42.587458 | orchestrator | ++ LOGO=ubuntu-logo 2025-05-28 17:44:42.587465 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-05-28 17:44:42.587474 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-05-28 17:44:42.587483 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-05-28 17:44:42.620658 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-05-28 17:45:02.452259 | orchestrator | 2025-05-28 17:45:02.452404 | orchestrator | # Status of Elasticsearch 2025-05-28 17:45:02.452422 | orchestrator | 2025-05-28 17:45:02.452435 | orchestrator | + pushd /opt/configuration/contrib 2025-05-28 17:45:02.452448 | orchestrator | + echo 2025-05-28 17:45:02.452460 | orchestrator | + echo '# Status of Elasticsearch' 2025-05-28 17:45:02.452471 | orchestrator | + echo 2025-05-28 17:45:02.452482 | orchestrator | + bash nagios-plugins/check_elasticsearch -H 
api-int.testbed.osism.xyz -s 2025-05-28 17:45:02.639477 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-05-28 17:45:02.639590 | orchestrator | 2025-05-28 17:45:02.639606 | orchestrator | # Status of MariaDB 2025-05-28 17:45:02.639619 | orchestrator | 2025-05-28 17:45:02.639631 | orchestrator | + echo 2025-05-28 17:45:02.639643 | orchestrator | + echo '# Status of MariaDB' 2025-05-28 17:45:02.639654 | orchestrator | + echo 2025-05-28 17:45:02.639664 | orchestrator | + MARIADB_USER=root_shard_0 2025-05-28 17:45:02.639677 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-05-28 17:45:02.705074 | orchestrator | Reading package lists... 2025-05-28 17:45:03.010410 | orchestrator | Building dependency tree... 2025-05-28 17:45:03.010646 | orchestrator | Reading state information... 2025-05-28 17:45:03.378128 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-05-28 17:45:03.378261 | orchestrator | bc set to manually installed. 2025-05-28 17:45:03.378277 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2025-05-28 17:45:04.041797 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-05-28 17:45:04.043062 | orchestrator | 2025-05-28 17:45:04.043109 | orchestrator | # Status of Prometheus 2025-05-28 17:45:04.043122 | orchestrator | 2025-05-28 17:45:04.043134 | orchestrator | + echo 2025-05-28 17:45:04.043147 | orchestrator | + echo '# Status of Prometheus' 2025-05-28 17:45:04.043159 | orchestrator | + echo 2025-05-28 17:45:04.043171 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-05-28 17:45:04.110349 | orchestrator | Unauthorized 2025-05-28 17:45:04.113759 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-05-28 17:45:04.180434 | orchestrator | Unauthorized 2025-05-28 17:45:04.184122 | orchestrator | 2025-05-28 17:45:04.184170 | orchestrator | # Status of RabbitMQ 2025-05-28 17:45:04.184184 | orchestrator | 2025-05-28 17:45:04.184196 | orchestrator | + echo 2025-05-28 17:45:04.184207 | orchestrator | + echo '# Status of RabbitMQ' 2025-05-28 17:45:04.184218 | orchestrator | + echo 2025-05-28 17:45:04.184230 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-05-28 17:45:04.628043 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-05-28 17:45:04.636217 | orchestrator | 2025-05-28 17:45:04.636287 | orchestrator | # Status of Redis 2025-05-28 17:45:04.636302 | orchestrator | 2025-05-28 17:45:04.636313 | orchestrator | + echo 2025-05-28 17:45:04.636325 | orchestrator | + echo '# Status of Redis' 2025-05-28 17:45:04.636338 | orchestrator | + echo 2025-05-28 17:45:04.636351 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-05-28 17:45:04.639812 | orchestrator | TCP OK - 0.001 second response time on 192.168.16.10 port 
6379|time=0.001456s;;;0.000000;10.000000 2025-05-28 17:45:04.640521 | orchestrator | 2025-05-28 17:45:04.640589 | orchestrator | # Create backup of MariaDB database 2025-05-28 17:45:04.640601 | orchestrator | 2025-05-28 17:45:04.640609 | orchestrator | + popd 2025-05-28 17:45:04.640617 | orchestrator | + echo 2025-05-28 17:45:04.640625 | orchestrator | + echo '# Create backup of MariaDB database' 2025-05-28 17:45:04.640632 | orchestrator | + echo 2025-05-28 17:45:04.640641 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-05-28 17:45:06.375959 | orchestrator | 2025-05-28 17:45:06 | INFO  | Task 9c7c4fb9-a5ea-4e94-8bcf-4ab20353fb7a (mariadb_backup) was prepared for execution. 2025-05-28 17:45:06.376083 | orchestrator | 2025-05-28 17:45:06 | INFO  | It takes a moment until task 9c7c4fb9-a5ea-4e94-8bcf-4ab20353fb7a (mariadb_backup) has been started and output is visible here. 2025-05-28 17:45:10.162420 | orchestrator | 2025-05-28 17:45:10.162537 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-28 17:45:10.164044 | orchestrator | 2025-05-28 17:45:10.165803 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-28 17:45:10.166545 | orchestrator | Wednesday 28 May 2025 17:45:10 +0000 (0:00:00.175) 0:00:00.175 ********* 2025-05-28 17:45:10.339239 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:45:10.456117 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:45:10.458121 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:45:10.458153 | orchestrator | 2025-05-28 17:45:10.462288 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-28 17:45:10.462331 | orchestrator | Wednesday 28 May 2025 17:45:10 +0000 (0:00:00.299) 0:00:00.474 ********* 2025-05-28 17:45:11.048406 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-05-28 17:45:11.048495 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-05-28 17:45:11.051223 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-05-28 17:45:11.051242 | orchestrator | 2025-05-28 17:45:11.052043 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-05-28 17:45:11.054359 | orchestrator | 2025-05-28 17:45:11.054377 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-05-28 17:45:11.054381 | orchestrator | Wednesday 28 May 2025 17:45:11 +0000 (0:00:00.592) 0:00:01.067 ********* 2025-05-28 17:45:11.423457 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-28 17:45:11.424496 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-28 17:45:11.428378 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-28 17:45:11.428414 | orchestrator | 2025-05-28 17:45:11.428427 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-28 17:45:11.429231 | orchestrator | Wednesday 28 May 2025 17:45:11 +0000 (0:00:00.374) 0:00:01.441 ********* 2025-05-28 17:45:11.955870 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:45:11.956390 | orchestrator | 2025-05-28 17:45:11.958983 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-05-28 17:45:11.959943 | orchestrator | Wednesday 28 May 2025 17:45:11 +0000 
(0:00:00.534) 0:00:01.975 ********* 2025-05-28 17:45:15.037721 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:45:15.041751 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:45:15.043993 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:45:15.045243 | orchestrator | 2025-05-28 17:45:15.046207 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2025-05-28 17:45:15.047492 | orchestrator | Wednesday 28 May 2025 17:45:15 +0000 (0:00:03.077) 0:00:05.053 ********* 2025-05-28 17:46:00.471580 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-05-28 17:46:00.471707 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-05-28 17:46:00.471725 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-28 17:46:00.471739 | orchestrator | mariadb_bootstrap_restart 2025-05-28 17:46:00.550951 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:46:00.552915 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:46:00.556694 | orchestrator | changed: [testbed-node-0] 2025-05-28 17:46:00.557628 | orchestrator | 2025-05-28 17:46:00.558298 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-05-28 17:46:00.558885 | orchestrator | skipping: no hosts matched 2025-05-28 17:46:00.559553 | orchestrator | 2025-05-28 17:46:00.560251 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-28 17:46:00.560746 | orchestrator | skipping: no hosts matched 2025-05-28 17:46:00.561471 | orchestrator | 2025-05-28 17:46:00.561950 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-05-28 17:46:00.562403 | orchestrator | skipping: no hosts matched 2025-05-28 17:46:00.562849 | orchestrator | 2025-05-28 17:46:00.563345 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-05-28 17:46:00.563724 | orchestrator | 2025-05-28 17:46:00.564425 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-05-28 17:46:00.565944 | orchestrator | Wednesday 28 May 2025 17:46:00 +0000 (0:00:45.518) 0:00:50.571 ********* 2025-05-28 17:46:00.734983 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:46:00.855006 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:46:00.855167 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:46:00.855611 | orchestrator | 2025-05-28 17:46:00.856370 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-05-28 17:46:00.857001 | orchestrator | Wednesday 28 May 2025 17:46:00 +0000 (0:00:00.303) 0:00:50.874 ********* 2025-05-28 17:46:01.209228 | orchestrator | skipping: [testbed-node-0] 2025-05-28 17:46:01.254510 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:46:01.255110 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:46:01.256513 | orchestrator | 2025-05-28 17:46:01.257104 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:46:01.257715 | orchestrator | 2025-05-28 17:46:01 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-28 17:46:01.258137 | orchestrator | 2025-05-28 17:46:01 | INFO  | Please wait and do not abort execution. 
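All of the reachable health checks above pass: Elasticsearch reports a green cluster with three data nodes, the Galera check sees wsrep_cluster_size = 3, RabbitMQ has three running disc nodes, and the Redis TCP probe gets its PONG and role:master. Only the two Prometheus probes come back with "Unauthorized", because the plain curl calls send no credentials against what is presumably a basic-auth-protected endpoint; with credentials the same probe would look roughly like this (user and password are placeholders, not values from this deployment):

    curl -s -u <prometheus-user>:<prometheus-password> https://api-int.testbed.osism.xyz:9091/-/healthy

The full MariaDB backup then succeeds: "osism apply mariadb_backup -e mariadb_backup_type=full" runs kolla-ansible's mariadb backup tasks, which start a one-shot Mariabackup container on the first host of the shard only (testbed-node-0 reports changed, the other two nodes skip the task). Judging from the incremental command that surfaces in the failure further down, the full-backup branch of kolla_mariadb_backup_replica.sh plausibly streams mbstream output through gzip in the same way, minus the incremental flags; a sketch under that assumption, not verified against the script source:

    # hypothetical full-backup branch, modeled on the line-36 fragment quoted in the traceback below
    mariabackup --defaults-file="${REPLICA_MY_CNF}" --backup --stream=mbstream \
        --history="${LAST_FULL_DATE}" \
      | gzip > "${BACKUP_DIR}/mysqlbackup-${LAST_FULL_DATE}.qp.mbc.mbs.gz"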
2025-05-28 17:46:01.259133 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 17:46:01.259823 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-28 17:46:01.260627 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-28 17:46:01.261352 | orchestrator | 2025-05-28 17:46:01.261936 | orchestrator | 2025-05-28 17:46:01.262436 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:46:01.263072 | orchestrator | Wednesday 28 May 2025 17:46:01 +0000 (0:00:00.399) 0:00:51.274 ********* 2025-05-28 17:46:01.263485 | orchestrator | =============================================================================== 2025-05-28 17:46:01.264213 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 45.52s 2025-05-28 17:46:01.265078 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.08s 2025-05-28 17:46:01.265840 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.59s 2025-05-28 17:46:01.266226 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.53s 2025-05-28 17:46:01.266627 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.40s 2025-05-28 17:46:01.267301 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.37s 2025-05-28 17:46:01.267533 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.30s 2025-05-28 17:46:01.267967 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2025-05-28 17:46:01.768316 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=incremental 2025-05-28 17:46:03.551440 | orchestrator | 2025-05-28 17:46:03 | INFO  | Task d684744e-287d-43f5-b82d-0db13466f3f0 (mariadb_backup) was prepared for execution. 2025-05-28 17:46:03.551548 | orchestrator | 2025-05-28 17:46:03 | INFO  | It takes a moment until task d684744e-287d-43f5-b82d-0db13466f3f0 (mariadb_backup) has been started and output is visible here. 
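The incremental backup that follows is the step that breaks, twice and identically. The task fails with rc 139, which decodes as 128 + 11: the process inside the one-shot container was killed by signal 11 (SIGSEGV). The pipeline that crashes, reconstructed from the shell fragment at line 36 of /usr/local/bin/kolla_mariadb_backup_replica.sh quoted verbatim in the traceback below:

    mariabackup --defaults-file="${REPLICA_MY_CNF}" --backup --stream=mbstream \
        --incremental-history-name="${LAST_FULL_DATE}" --history="${LAST_FULL_DATE}" \
      | gzip > "${BACKUP_DIR}/incremental-$(date +%H)-mysqlbackup-${LAST_FULL_DATE}.qp.mbc.mbs.gz"

The top application frame in the backtrace is server_mysql_fetch_row, i.e. mariabackup dies while reading a result set from the server, apparently during the incremental-history lookup and before any data files are copied.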
2025-05-28 17:46:07.386856 | orchestrator | 2025-05-28 17:46:07.388850 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-28 17:46:07.389368 | orchestrator | 2025-05-28 17:46:07.391018 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-28 17:46:07.393152 | orchestrator | Wednesday 28 May 2025 17:46:07 +0000 (0:00:00.174) 0:00:00.174 ********* 2025-05-28 17:46:07.564752 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:46:07.694338 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:46:07.697271 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:46:07.697895 | orchestrator | 2025-05-28 17:46:07.698675 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-28 17:46:07.699480 | orchestrator | Wednesday 28 May 2025 17:46:07 +0000 (0:00:00.311) 0:00:00.486 ********* 2025-05-28 17:46:08.257398 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-05-28 17:46:08.258929 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-05-28 17:46:08.259265 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-05-28 17:46:08.260561 | orchestrator | 2025-05-28 17:46:08.261441 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-05-28 17:46:08.262665 | orchestrator | 2025-05-28 17:46:08.263568 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-05-28 17:46:08.264297 | orchestrator | Wednesday 28 May 2025 17:46:08 +0000 (0:00:00.565) 0:00:01.051 ********* 2025-05-28 17:46:08.651753 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-28 17:46:08.652655 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-28 17:46:08.656809 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-28 17:46:08.657710 | orchestrator | 2025-05-28 17:46:08.659955 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-28 17:46:08.660352 | orchestrator | Wednesday 28 May 2025 17:46:08 +0000 (0:00:00.393) 0:00:01.444 ********* 2025-05-28 17:46:09.169494 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:46:09.169998 | orchestrator | 2025-05-28 17:46:09.173829 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-05-28 17:46:09.173869 | orchestrator | Wednesday 28 May 2025 17:46:09 +0000 (0:00:00.517) 0:00:01.962 ********* 2025-05-28 17:46:12.287001 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:46:12.287771 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:46:12.290540 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:46:12.290577 | orchestrator | 2025-05-28 17:46:12.290592 | orchestrator | TASK [mariadb : Taking incremental database backup via Mariabackup] ************ 2025-05-28 17:46:12.290605 | orchestrator | Wednesday 28 May 2025 17:46:12 +0000 (0:00:03.114) 0:00:05.077 ********* 2025-05-28 17:46:16.701509 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:46:16.702283 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:46:16.704425 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": true, "msg": "Container exited with non-zero return code 139", "rc": 139, "stderr": "INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json\nINFO:__main__:Validating config file\nINFO:__main__:Kolla config strategy set to: COPY_ALWAYS\nINFO:__main__:Copying /etc/mysql/my.cnf to /etc/kolla/defaults/etc/mysql/my.cnf\nINFO:__main__:Copying permissions from /etc/mysql/my.cnf onto /etc/kolla/defaults/etc/mysql/my.cnf\nINFO:__main__:Copying service configuration files\nINFO:__main__:Deleting /etc/mysql/my.cnf\nINFO:__main__:Copying /var/lib/kolla/config_files/my.cnf to /etc/mysql/my.cnf\nINFO:__main__:Setting permission for /etc/mysql/my.cnf\nINFO:__main__:Writing out command to execute\nINFO:__main__:Setting permission for /var/log/kolla/mariadb\nINFO:__main__:Setting permission for /backup\n[00] 2025-05-28 17:46:16 Connecting to MariaDB server host: 192.168.16.11, user: backup_shard_0, password: set, port: 3306, socket: not set\n[00] 2025-05-28 17:46:16 Using server version 10.11.13-MariaDB-deb12-log\nmariabackup based on MariaDB server 10.11.13-MariaDB debian-linux-gnu (x86_64)\n[00] 2025-05-28 17:46:16 incremental backup from 0 is enabled.\n[00] 2025-05-28 17:46:16 uses posix_fadvise().\n[00] 2025-05-28 17:46:16 cd to /var/lib/mysql/\n[00] 2025-05-28 17:46:16 open files limit requested 0, set to 1048576\n[00] 2025-05-28 17:46:16 mariabackup: using the following InnoDB configuration:\n[00] 2025-05-28 17:46:16 innodb_data_home_dir = \n[00] 2025-05-28 17:46:16 innodb_data_file_path = ibdata1:12M:autoextend\n[00] 2025-05-28 17:46:16 innodb_log_group_home_dir = ./\n[00] 2025-05-28 17:46:16 InnoDB: Using liburing\n2025-05-28 17:46:16 0 [Note] InnoDB: Number of transaction pools: 1\nmariabackup: io_uring_queue_init() failed with EPERM: sysctl kernel.io_uring_disabled has the value 2, or 1 and the user of the process is not a member of sysctl kernel.io_uring_group. (see man 2 io_uring_setup).\n2025-05-28 17:46:16 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF\n2025-05-28 17:46:16 0 [Note] InnoDB: Memory-mapped log (block size=512 bytes)\n250528 17:46:16 [ERROR] mariabackup got signal 11 ;\nSorry, we probably made a mistake, and this is a bug.\n\nYour assistance in bug reporting will enable us to fix this for the next release.\nTo report this bug, see https://mariadb.com/kb/en/reporting-bugs about how to report\na bug on https://jira.mariadb.org/.\n\nPlease include the information from the server start above, to the end of the\ninformation below.\n\nServer version: 10.11.13-MariaDB-deb12 source revision: 8fb09426b98583916ccfd4f8c49741adc115bac3\n\nThe information page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mariadbd/\ncontains instructions to obtain a better version of the backtrace below.\nFollowing these instructions will help MariaDB developers provide a fix quicker.\n\nAttempting backtrace. 
Include this in the bug report.\n(note: Retrieving this information may fail)\n\nThread pointer: 0x0\nstack_bottom = 0x0 thread_stack 0x49000\nPrinting to addr2line failed\nmariabackup(my_print_stacktrace+0x2e)[0x5f7de2f403ae]\nmariabackup(handle_fatal_signal+0x229)[0x5f7de2a636d9]\n/lib/x86_64-linux-gnu/libc.so.6(+0x3c050)[0x768a9b25b050]\nmariabackup(server_mysql_fetch_row+0x14)[0x5f7de26af474]\nmariabackup(+0x76ca87)[0x5f7de2681a87]\nmariabackup(+0x75f37a)[0x5f7de267437a]\nmariabackup(main+0x163)[0x5f7de2619053]\n/lib/x86_64-linux-gnu/libc.so.6(+0x2724a)[0x768a9b24624a]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x768a9b246305]\nmariabackup(_start+0x21)[0x5f7de265e161]\nWriting a core file...\nWorking directory at /var/lib/mysql\nResource Limits (excludes unlimited resources):\nLimit Soft Limit Hard Limit Units \nMax stack size 8388608 unlimited bytes \nMax open files 1048576 1048576 files \nMax locked memory 8388608 8388608 bytes \nMax pending signals 128063 128063 signals \nMax msgqueue size 819200 819200 bytes \nMax nice priority 0 0 \nMax realtime priority 0 0 \nCore pattern: |/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E\n\nKernel version: Linux version 6.11.0-26-generic (buildd@lcy02-amd64-074) (x86_64-linux-gnu-gcc-13 (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0, GNU ld (GNU Binutils for Ubuntu) 2.42) #26~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Apr 17 19:20:47 UTC 2\n\n/usr/local/bin/kolla_mariadb_backup_replica.sh: line 36: 44 Segmentation fault (core dumped) mariabackup --defaults-file=\"${REPLICA_MY_CNF}\" --backup --stream=mbstream --incremental-history-name=\"${LAST_FULL_DATE}\" --history=\"${LAST_FULL_DATE}\"\n 45 Done | gzip > \"${BACKUP_DIR}/incremental-$(date +%H)-mysqlbackup-${LAST_FULL_DATE}.qp.mbc.mbs.gz\"\n", "stdout": "Taking an incremental backup\n", "stdout_lines": ["Taking an incremental backup"]} 2025-05-28 17:46:16.862406 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-05-28 17:46:16.863743 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-05-28 17:46:16.864849 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-28 17:46:16.866089 | orchestrator | mariadb_bootstrap_restart 2025-05-28
17:46:16.943406 | orchestrator | 2025-05-28 17:46:16.943519 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-05-28 17:46:16.950751 | orchestrator | skipping: no hosts matched 2025-05-28 17:46:16.952305 | orchestrator | 2025-05-28 17:46:16.954227 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-28 17:46:16.954676 | orchestrator | skipping: no hosts matched 2025-05-28 17:46:16.955990 | orchestrator | 2025-05-28 17:46:16.956024 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-05-28 17:46:16.956962 | orchestrator | skipping: no hosts matched 2025-05-28 17:46:16.957257 | orchestrator | 2025-05-28 17:46:16.957939 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-05-28 17:46:16.958462 | orchestrator | 2025-05-28 17:46:16.958850 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-05-28 17:46:16.959239 | orchestrator | Wednesday 28 May 2025 17:46:16 +0000 (0:00:04.657) 0:00:09.734 ********* 2025-05-28 17:46:17.158744 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:46:17.159454 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:46:17.160458 | orchestrator | 2025-05-28 17:46:17.161198 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-05-28 17:46:17.162169 | orchestrator | Wednesday 28 May 2025 17:46:17 +0000 (0:00:00.217) 0:00:09.952 ********* 2025-05-28 17:46:17.290013 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:46:17.293154 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:46:17.294469 | orchestrator | 2025-05-28 17:46:17.295337 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:46:17.296456 | orchestrator | 2025-05-28 17:46:17 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-28 17:46:17.297042 | orchestrator | 2025-05-28 17:46:17 | INFO  | Please wait and do not abort execution. 
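Two separate problems are visible in the stderr above. First, io_uring_queue_init() fails with EPERM: the node has the kernel.io_uring_disabled sysctl set to 2 (or to 1 with the container user outside kernel.io_uring_group), so InnoDB logs a warning and falls back to innodb_use_native_aio=OFF; on its own this fallback is harmless. Second, mariabackup then receives signal 11, and since the retry below crashes in exactly the same frame, this looks like a reproducible defect in the 10.11.13 mariabackup binary rather than a flaky environment. If io_uring is actually wanted inside the containers, the host policy can be inspected and relaxed; a minimal sketch, assuming root on the node (semantics per io_uring_setup(2)):

    sysctl kernel.io_uring_disabled    # 2 = disabled, 1 = restricted to kernel.io_uring_group, 0 = enabled
    sudo sysctl -w kernel.io_uring_disabled=0
    echo 'kernel.io_uring_disabled = 0' | sudo tee /etc/sysctl.d/99-io-uring.conf   # persist across reboots

This would only silence the EPERM fallback, though; it is unlikely to make the SIGSEGV go away.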
2025-05-28 17:46:17.298985 | orchestrator | testbed-node-0 : ok=5  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-05-28 17:46:17.299903 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-28 17:46:17.300925 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-28 17:46:17.301671 | orchestrator | 2025-05-28 17:46:17.302539 | orchestrator | 2025-05-28 17:46:17.303385 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:46:17.304171 | orchestrator | Wednesday 28 May 2025 17:46:17 +0000 (0:00:00.132) 0:00:10.084 ********* 2025-05-28 17:46:17.304775 | orchestrator | =============================================================================== 2025-05-28 17:46:17.306180 | orchestrator | mariadb : Taking incremental database backup via Mariabackup ------------ 4.66s 2025-05-28 17:46:17.306815 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.11s 2025-05-28 17:46:17.307772 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.57s 2025-05-28 17:46:17.308247 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.52s 2025-05-28 17:46:17.309205 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.39s 2025-05-28 17:46:17.310174 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2025-05-28 17:46:17.310646 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.22s 2025-05-28 17:46:17.311470 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.13s 2025-05-28 17:46:17.650898 | orchestrator | 2025-05-28 17:46:17 | INFO  | Task 827dc199-7910-4d3e-a26f-7785f3bd399a (mariadb_backup) was prepared for execution. 2025-05-28 17:46:17.651000 | orchestrator | 2025-05-28 17:46:17 | INFO  | It takes a moment until task 827dc199-7910-4d3e-a26f-7785f3bd399a (mariadb_backup) has been started and output is visible here. 
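The second attempt (task 827dc199, below) reproduces the crash exactly: same EPERM warning, same rc 139, same frames with server_mysql_fetch_row at the top; only the ASLR-randomized addresses differ between the two backtraces. To confirm that only the incremental path is affected, the backup can be rerun by hand on the node; a sketch, assuming the mariadb container image ships the mariabackup binary and with the backup user's password left as a placeholder since it does not appear in the log:

    # full backup by hand: expected to work, as it did above
    docker exec mariadb mariabackup --backup --stream=mbstream \
        --host=192.168.16.11 --port=3306 --user=backup_shard_0 --password='...' > /dev/null
    # adding --incremental-history-name=<date-of-last-full> should reproduce the segfault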
2025-05-28 17:46:21.486120 | orchestrator | 2025-05-28 17:46:21.486229 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-28 17:46:21.486245 | orchestrator | 2025-05-28 17:46:21.487519 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-28 17:46:21.489773 | orchestrator | Wednesday 28 May 2025 17:46:21 +0000 (0:00:00.178) 0:00:00.178 ********* 2025-05-28 17:46:21.673139 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:46:21.789605 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:46:21.790531 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:46:21.790836 | orchestrator | 2025-05-28 17:46:21.794523 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-28 17:46:21.794577 | orchestrator | Wednesday 28 May 2025 17:46:21 +0000 (0:00:00.309) 0:00:00.488 ********* 2025-05-28 17:46:22.336309 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-05-28 17:46:22.337246 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-05-28 17:46:22.339324 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-05-28 17:46:22.339662 | orchestrator | 2025-05-28 17:46:22.340668 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-05-28 17:46:22.342196 | orchestrator | 2025-05-28 17:46:22.342934 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-05-28 17:46:22.343629 | orchestrator | Wednesday 28 May 2025 17:46:22 +0000 (0:00:00.546) 0:00:01.035 ********* 2025-05-28 17:46:22.733579 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-28 17:46:22.735510 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-28 17:46:22.736872 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-28 17:46:22.738407 | orchestrator | 2025-05-28 17:46:22.740146 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-28 17:46:22.741225 | orchestrator | Wednesday 28 May 2025 17:46:22 +0000 (0:00:00.395) 0:00:01.430 ********* 2025-05-28 17:46:23.254982 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 17:46:23.255554 | orchestrator | 2025-05-28 17:46:23.256388 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-05-28 17:46:23.257217 | orchestrator | Wednesday 28 May 2025 17:46:23 +0000 (0:00:00.523) 0:00:01.954 ********* 2025-05-28 17:46:26.523196 | orchestrator | ok: [testbed-node-0] 2025-05-28 17:46:26.524639 | orchestrator | ok: [testbed-node-1] 2025-05-28 17:46:26.524680 | orchestrator | ok: [testbed-node-2] 2025-05-28 17:46:26.524702 | orchestrator | 2025-05-28 17:46:26.524724 | orchestrator | TASK [mariadb : Taking incremental database backup via Mariabackup] ************ 2025-05-28 17:46:26.526626 | orchestrator | Wednesday 28 May 2025 17:46:26 +0000 (0:00:03.261) 0:00:05.215 ********* 2025-05-28 17:46:31.099793 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:46:31.099942 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:46:31.101755 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": true, "msg": "Container exited with non-zero return code 139", "rc": 139, "stderr": "INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json\nINFO:__main__:Validating config file\nINFO:__main__:Kolla config strategy set to: COPY_ALWAYS\nINFO:__main__:Copying /etc/mysql/my.cnf to /etc/kolla/defaults/etc/mysql/my.cnf\nINFO:__main__:Copying permissions from /etc/mysql/my.cnf onto /etc/kolla/defaults/etc/mysql/my.cnf\nINFO:__main__:Copying service configuration files\nINFO:__main__:Deleting /etc/mysql/my.cnf\nINFO:__main__:Copying /var/lib/kolla/config_files/my.cnf to /etc/mysql/my.cnf\nINFO:__main__:Setting permission for /etc/mysql/my.cnf\nINFO:__main__:Writing out command to execute\nINFO:__main__:Setting permission for /var/log/kolla/mariadb\nINFO:__main__:Setting permission for /backup\n[00] 2025-05-28 17:46:30 Connecting to MariaDB server host: 192.168.16.11, user: backup_shard_0, password: set, port: 3306, socket: not set\n[00] 2025-05-28 17:46:30 Using server version 10.11.13-MariaDB-deb12-log\nmariabackup based on MariaDB server 10.11.13-MariaDB debian-linux-gnu (x86_64)\n[00] 2025-05-28 17:46:30 incremental backup from 0 is enabled.\n[00] 2025-05-28 17:46:30 uses posix_fadvise().\n[00] 2025-05-28 17:46:30 cd to /var/lib/mysql/\n[00] 2025-05-28 17:46:30 open files limit requested 0, set to 1048576\n[00] 2025-05-28 17:46:30 mariabackup: using the following InnoDB configuration:\n[00] 2025-05-28 17:46:30 innodb_data_home_dir = \n[00] 2025-05-28 17:46:30 innodb_data_file_path = ibdata1:12M:autoextend\n[00] 2025-05-28 17:46:30 innodb_log_group_home_dir = ./\n[00] 2025-05-28 17:46:30 InnoDB: Using liburing\n2025-05-28 17:46:30 0 [Note] InnoDB: Number of transaction pools: 1\nmariabackup: io_uring_queue_init() failed with EPERM: sysctl kernel.io_uring_disabled has the value 2, or 1 and the user of the process is not a member of sysctl kernel.io_uring_group. (see man 2 io_uring_setup).\n2025-05-28 17:46:30 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF\n2025-05-28 17:46:30 0 [Note] InnoDB: Memory-mapped log (block size=512 bytes)\n250528 17:46:30 [ERROR] mariabackup got signal 11 ;\nSorry, we probably made a mistake, and this is a bug.\n\nYour assistance in bug reporting will enable us to fix this for the next release.\nTo report this bug, see https://mariadb.com/kb/en/reporting-bugs about how to report\na bug on https://jira.mariadb.org/.\n\nPlease include the information from the server start above, to the end of the\ninformation below.\n\nServer version: 10.11.13-MariaDB-deb12 source revision: 8fb09426b98583916ccfd4f8c49741adc115bac3\n\nThe information page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mariadbd/\ncontains instructions to obtain a better version of the backtrace below.\nFollowing these instructions will help MariaDB developers provide a fix quicker.\n\nAttempting backtrace. 
Include this in the bug report.\n(note: Retrieving this information may fail)\n\nThread pointer: 0x0\nstack_bottom = 0x0 thread_stack 0x49000\nPrinting to addr2line failed\nmariabackup(my_print_stacktrace+0x2e)[0x61321e5033ae]\nmariabackup(handle_fatal_signal+0x229)[0x61321e0266d9]\n/lib/x86_64-linux-gnu/libc.so.6(+0x3c050)[0x749977cdb050]\nmariabackup(server_mysql_fetch_row+0x14)[0x61321dc72474]\nmariabackup(+0x76ca87)[0x61321dc44a87]\nmariabackup(+0x75f37a)[0x61321dc3737a]\nmariabackup(main+0x163)[0x61321dbdc053]\n/lib/x86_64-linux-gnu/libc.so.6(+0x2724a)[0x749977cc624a]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x749977cc6305]\nmariabackup(_start+0x21)[0x61321dc21161]\nWriting a core file...\nWorking directory at /var/lib/mysql\nResource Limits (excludes unlimited resources):\nLimit Soft Limit Hard Limit Units \nMax stack size 8388608 unlimited bytes \nMax open files 1048576 1048576 files \nMax locked memory 8388608 8388608 bytes \nMax pending signals 128063 128063 signals \nMax msgqueue size 819200 819200 bytes \nMax nice priority 0 0 \nMax realtime priority 0 0 \nCore pattern: |/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E\n\nKernel version: Linux version 6.11.0-26-generic (buildd@lcy02-amd64-074) (x86_64-linux-gnu-gcc-13 (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0, GNU ld (GNU Binutils for Ubuntu) 2.42) #26~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Apr 17 19:20:47 UTC 2\n\n/usr/local/bin/kolla_mariadb_backup_replica.sh: line 36: 44 Segmentation fault (core dumped) mariabackup --defaults-file=\"${REPLICA_MY_CNF}\" --backup --stream=mbstream --incremental-history-name=\"${LAST_FULL_DATE}\" --history=\"${LAST_FULL_DATE}\"\n 45 Done | gzip > \"${BACKUP_DIR}/incremental-$(date +%H)-mysqlbackup-${LAST_FULL_DATE}.qp.mbc.mbs.gz\"\n", "stdout": "Taking an incremental backup\n", "stdout_lines": ["Taking an incremental backup"]} 2025-05-28 17:46:31.280447 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-05-28 17:46:31.281126 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-05-28 17:46:31.281904 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-28 17:46:31.283242 | orchestrator | mariadb_bootstrap_restart 2025-05-28
17:46:31.355258 | orchestrator | 2025-05-28 17:46:31.355805 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-05-28 17:46:31.357116 | orchestrator | skipping: no hosts matched 2025-05-28 17:46:31.358520 | orchestrator | 2025-05-28 17:46:31.359485 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-28 17:46:31.360269 | orchestrator | skipping: no hosts matched 2025-05-28 17:46:31.361415 | orchestrator | 2025-05-28 17:46:31.363106 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-05-28 17:46:31.364357 | orchestrator | skipping: no hosts matched 2025-05-28 17:46:31.365109 | orchestrator | 2025-05-28 17:46:31.365717 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-05-28 17:46:31.366297 | orchestrator | 2025-05-28 17:46:31.366876 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-05-28 17:46:31.367424 | orchestrator | Wednesday 28 May 2025 17:46:31 +0000 (0:00:04.838) 0:00:10.054 ********* 2025-05-28 17:46:31.570249 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:46:31.571060 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:46:31.572240 | orchestrator | 2025-05-28 17:46:31.573015 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-05-28 17:46:31.573998 | orchestrator | Wednesday 28 May 2025 17:46:31 +0000 (0:00:00.214) 0:00:10.268 ********* 2025-05-28 17:46:31.709545 | orchestrator | skipping: [testbed-node-1] 2025-05-28 17:46:31.710308 | orchestrator | skipping: [testbed-node-2] 2025-05-28 17:46:31.712435 | orchestrator | 2025-05-28 17:46:31.712932 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 17:46:31.713374 | orchestrator | 2025-05-28 17:46:31 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-28 17:46:31.713992 | orchestrator | 2025-05-28 17:46:31 | INFO  | Please wait and do not abort execution. 
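After the second failure the deploy script stops retrying and exits non-zero: the ERROR block below shows rc 2 after a 3m43s runtime for the whole script, the deploy playbook records one failed task on the orchestrator, and Zuul still runs its post-run playbooks so that logs are staged and the cloud resources are torn down. The timing (the retry is prepared immediately after the first recap) suggests a single scripted retry; the shape this implies, as a purely hypothetical sketch since the deploy script source is not part of this log and the exit behaviour of osism apply is assumed:

    # hypothetical: retry the incremental backup once, then give up
    osism apply mariadb_backup -e mariadb_backup_type=incremental \
      || osism apply mariadb_backup -e mariadb_backup_type=incremental \
      || exit 2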
2025-05-28 17:46:31.715891 | orchestrator | testbed-node-0 : ok=5  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-05-28 17:46:31.716544 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-28 17:46:31.717399 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-28 17:46:31.718574 | orchestrator | 2025-05-28 17:46:31.718985 | orchestrator | 2025-05-28 17:46:31.721397 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 17:46:31.723390 | orchestrator | Wednesday 28 May 2025 17:46:31 +0000 (0:00:00.139) 0:00:10.408 ********* 2025-05-28 17:46:31.725977 | orchestrator | =============================================================================== 2025-05-28 17:46:31.726471 | orchestrator | mariadb : Taking incremental database backup via Mariabackup ------------ 4.84s 2025-05-28 17:46:31.727188 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.26s 2025-05-28 17:46:31.727881 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.55s 2025-05-28 17:46:31.731391 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.52s 2025-05-28 17:46:31.732360 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.40s 2025-05-28 17:46:31.733498 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2025-05-28 17:46:31.734719 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.21s 2025-05-28 17:46:31.737039 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.14s 2025-05-28 17:46:32.551872 | orchestrator | ERROR 2025-05-28 17:46:32.552318 | orchestrator | { 2025-05-28 17:46:32.552420 | orchestrator | "delta": "0:03:43.117692", 2025-05-28 17:46:32.552486 | orchestrator | "end": "2025-05-28 17:46:32.287882", 2025-05-28 17:46:32.552544 | orchestrator | "msg": "non-zero return code", 2025-05-28 17:46:32.552595 | orchestrator | "rc": 2, 2025-05-28 17:46:32.552645 | orchestrator | "start": "2025-05-28 17:42:49.170190" 2025-05-28 17:46:32.552692 | orchestrator | } failure 2025-05-28 17:46:32.598461 | 2025-05-28 17:46:32.598678 | PLAY RECAP 2025-05-28 17:46:32.598904 | orchestrator | ok: 23 changed: 10 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0 2025-05-28 17:46:32.598984 | 2025-05-28 17:46:32.829773 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-05-28 17:46:32.831932 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-05-28 17:46:33.557618 | 2025-05-28 17:46:33.557776 | PLAY [Post output play] 2025-05-28 17:46:33.574025 | 2025-05-28 17:46:33.574173 | LOOP [stage-output : Register sources] 2025-05-28 17:46:33.642612 | 2025-05-28 17:46:33.642981 | TASK [stage-output : Check sudo] 2025-05-28 17:46:34.507249 | orchestrator | sudo: a password is required 2025-05-28 17:46:34.682806 | orchestrator | ok: Runtime: 0:00:00.022111 2025-05-28 17:46:34.696939 | 2025-05-28 17:46:34.697124 | LOOP [stage-output : Set source and destination for files and folders] 2025-05-28 17:46:34.737775 | 2025-05-28 17:46:34.738244 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-05-28 17:46:34.816600 | orchestrator | ok 2025-05-28 17:46:34.825858 | 2025-05-28 
17:46:34.825991 | LOOP [stage-output : Ensure target folders exist] 2025-05-28 17:46:35.274425 | orchestrator | ok: "docs" 2025-05-28 17:46:35.274777 | 2025-05-28 17:46:35.516074 | orchestrator | ok: "artifacts" 2025-05-28 17:46:35.780885 | orchestrator | ok: "logs" 2025-05-28 17:46:35.807421 | 2025-05-28 17:46:35.807653 | LOOP [stage-output : Copy files and folders to staging folder] 2025-05-28 17:46:35.845861 | 2025-05-28 17:46:35.846228 | TASK [stage-output : Make all log files readable] 2025-05-28 17:46:36.142436 | orchestrator | ok 2025-05-28 17:46:36.151879 | 2025-05-28 17:46:36.152069 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-05-28 17:46:36.187127 | orchestrator | skipping: Conditional result was False 2025-05-28 17:46:36.196789 | 2025-05-28 17:46:36.196945 | TASK [stage-output : Discover log files for compression] 2025-05-28 17:46:36.222309 | orchestrator | skipping: Conditional result was False 2025-05-28 17:46:36.229839 | 2025-05-28 17:46:36.229964 | LOOP [stage-output : Archive everything from logs] 2025-05-28 17:46:36.274794 | 2025-05-28 17:46:36.275022 | PLAY [Post cleanup play] 2025-05-28 17:46:36.284277 | 2025-05-28 17:46:36.284410 | TASK [Set cloud fact (Zuul deployment)] 2025-05-28 17:46:36.351738 | orchestrator | ok 2025-05-28 17:46:36.363557 | 2025-05-28 17:46:36.363675 | TASK [Set cloud fact (local deployment)] 2025-05-28 17:46:36.398176 | orchestrator | skipping: Conditional result was False 2025-05-28 17:46:36.413442 | 2025-05-28 17:46:36.413588 | TASK [Clean the cloud environment] 2025-05-28 17:46:37.039509 | orchestrator | 2025-05-28 17:46:37 - clean up servers 2025-05-28 17:46:37.841732 | orchestrator | 2025-05-28 17:46:37 - testbed-manager 2025-05-28 17:46:37.925002 | orchestrator | 2025-05-28 17:46:37 - testbed-node-4 2025-05-28 17:46:38.010980 | orchestrator | 2025-05-28 17:46:38 - testbed-node-2 2025-05-28 17:46:38.096489 | orchestrator | 2025-05-28 17:46:38 - testbed-node-0 2025-05-28 17:46:38.178749 | orchestrator | 2025-05-28 17:46:38 - testbed-node-3 2025-05-28 17:46:38.276056 | orchestrator | 2025-05-28 17:46:38 - testbed-node-5 2025-05-28 17:46:38.368413 | orchestrator | 2025-05-28 17:46:38 - testbed-node-1 2025-05-28 17:46:38.451601 | orchestrator | 2025-05-28 17:46:38 - clean up keypairs 2025-05-28 17:46:38.469890 | orchestrator | 2025-05-28 17:46:38 - testbed 2025-05-28 17:46:38.491628 | orchestrator | 2025-05-28 17:46:38 - wait for servers to be gone 2025-05-28 17:46:49.292646 | orchestrator | 2025-05-28 17:46:49 - clean up ports 2025-05-28 17:46:49.505630 | orchestrator | 2025-05-28 17:46:49 - 05fd1a50-7917-4fa1-8a17-c88eb5d6bd83 2025-05-28 17:46:49.982329 | orchestrator | 2025-05-28 17:46:49 - 09f3103f-bc20-4ad8-9174-ce53191e6ef4 2025-05-28 17:46:50.263993 | orchestrator | 2025-05-28 17:46:50 - 30128105-74e6-4aa0-b24c-bf62958e838e 2025-05-28 17:46:50.473485 | orchestrator | 2025-05-28 17:46:50 - 4af5e29f-7c8d-4c17-b413-ce58a530b3ae 2025-05-28 17:46:50.690411 | orchestrator | 2025-05-28 17:46:50 - 76b944cd-d47d-41bf-a4d3-db252e3439ec 2025-05-28 17:46:50.899441 | orchestrator | 2025-05-28 17:46:50 - 7f5bdaf0-33a7-4ffe-9463-e9bbe4386d8d 2025-05-28 17:46:51.099783 | orchestrator | 2025-05-28 17:46:51 - ce8694d6-5517-4b67-a2d5-e4a517a05a3e 2025-05-28 17:46:51.317030 | orchestrator | 2025-05-28 17:46:51 - clean up volumes 2025-05-28 17:46:51.453011 | orchestrator | 2025-05-28 17:46:51 - testbed-volume-4-node-base 2025-05-28 17:46:51.493768 | orchestrator | 2025-05-28 17:46:51 - testbed-volume-3-node-base 2025-05-28 
17:46:51.531689 | orchestrator | 2025-05-28 17:46:51 - testbed-volume-1-node-base 2025-05-28 17:46:51.570342 | orchestrator | 2025-05-28 17:46:51 - testbed-volume-0-node-base 2025-05-28 17:46:51.612143 | orchestrator | 2025-05-28 17:46:51 - testbed-volume-5-node-base 2025-05-28 17:46:51.656066 | orchestrator | 2025-05-28 17:46:51 - testbed-volume-2-node-base 2025-05-28 17:46:51.705932 | orchestrator | 2025-05-28 17:46:51 - testbed-volume-manager-base 2025-05-28 17:46:51.749717 | orchestrator | 2025-05-28 17:46:51 - testbed-volume-0-node-3 2025-05-28 17:46:51.793880 | orchestrator | 2025-05-28 17:46:51 - testbed-volume-1-node-4 2025-05-28 17:46:51.835826 | orchestrator | 2025-05-28 17:46:51 - testbed-volume-7-node-4 2025-05-28 17:46:51.880319 | orchestrator | 2025-05-28 17:46:51 - testbed-volume-2-node-5 2025-05-28 17:46:51.927057 | orchestrator | 2025-05-28 17:46:51 - testbed-volume-6-node-3 2025-05-28 17:46:51.969382 | orchestrator | 2025-05-28 17:46:51 - testbed-volume-3-node-3 2025-05-28 17:46:52.015604 | orchestrator | 2025-05-28 17:46:52 - testbed-volume-5-node-5 2025-05-28 17:46:52.058774 | orchestrator | 2025-05-28 17:46:52 - testbed-volume-8-node-5 2025-05-28 17:46:52.102142 | orchestrator | 2025-05-28 17:46:52 - testbed-volume-4-node-4 2025-05-28 17:46:52.146384 | orchestrator | 2025-05-28 17:46:52 - disconnect routers 2025-05-28 17:46:52.279574 | orchestrator | 2025-05-28 17:46:52 - testbed 2025-05-28 17:46:53.776535 | orchestrator | 2025-05-28 17:46:53 - clean up subnets 2025-05-28 17:46:53.831093 | orchestrator | 2025-05-28 17:46:53 - subnet-testbed-management 2025-05-28 17:46:54.039556 | orchestrator | 2025-05-28 17:46:54 - clean up networks 2025-05-28 17:46:54.201465 | orchestrator | 2025-05-28 17:46:54 - net-testbed-management 2025-05-28 17:46:54.494244 | orchestrator | 2025-05-28 17:46:54 - clean up security groups 2025-05-28 17:46:54.535523 | orchestrator | 2025-05-28 17:46:54 - testbed-management 2025-05-28 17:46:54.655833 | orchestrator | 2025-05-28 17:46:54 - testbed-node 2025-05-28 17:46:54.777450 | orchestrator | 2025-05-28 17:46:54 - clean up floating ips 2025-05-28 17:46:54.814361 | orchestrator | 2025-05-28 17:46:54 - 81.163.193.180 2025-05-28 17:46:55.166375 | orchestrator | 2025-05-28 17:46:55 - clean up routers 2025-05-28 17:46:55.226327 | orchestrator | 2025-05-28 17:46:55 - testbed 2025-05-28 17:46:56.477820 | orchestrator | ok: Runtime: 0:00:19.414194 2025-05-28 17:46:56.482326 | 2025-05-28 17:46:56.482485 | PLAY RECAP 2025-05-28 17:46:56.482607 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-05-28 17:46:56.482669 | 2025-05-28 17:46:56.616807 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-05-28 17:46:56.619510 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-05-28 17:46:57.393628 | 2025-05-28 17:46:57.393791 | PLAY [Cleanup play] 2025-05-28 17:46:57.410447 | 2025-05-28 17:46:57.410648 | TASK [Set cloud fact (Zuul deployment)] 2025-05-28 17:46:57.470696 | orchestrator | ok 2025-05-28 17:46:57.481027 | 2025-05-28 17:46:57.481215 | TASK [Set cloud fact (local deployment)] 2025-05-28 17:46:57.515935 | orchestrator | skipping: Conditional result was False 2025-05-28 17:46:57.537610 | 2025-05-28 17:46:57.537785 | TASK [Clean the cloud environment] 2025-05-28 17:46:58.711539 | orchestrator | 2025-05-28 17:46:58 - clean up servers 2025-05-28 17:46:59.180996 | orchestrator | 2025-05-28 17:46:59 - clean up keypairs 2025-05-28 
17:46:59.196390 | orchestrator | 2025-05-28 17:46:59 - wait for servers to be gone 2025-05-28 17:46:59.238271 | orchestrator | 2025-05-28 17:46:59 - clean up ports 2025-05-28 17:46:59.313251 | orchestrator | 2025-05-28 17:46:59 - clean up volumes 2025-05-28 17:46:59.380889 | orchestrator | 2025-05-28 17:46:59 - disconnect routers 2025-05-28 17:46:59.404369 | orchestrator | 2025-05-28 17:46:59 - clean up subnets 2025-05-28 17:46:59.430216 | orchestrator | 2025-05-28 17:46:59 - clean up networks 2025-05-28 17:46:59.600739 | orchestrator | 2025-05-28 17:46:59 - clean up security groups 2025-05-28 17:46:59.637833 | orchestrator | 2025-05-28 17:46:59 - clean up floating ips 2025-05-28 17:46:59.666170 | orchestrator | 2025-05-28 17:46:59 - clean up routers 2025-05-28 17:47:00.078051 | orchestrator | ok: Runtime: 0:00:01.386313 2025-05-28 17:47:00.081269 | 2025-05-28 17:47:00.081402 | PLAY RECAP 2025-05-28 17:47:00.081570 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-05-28 17:47:00.081637 | 2025-05-28 17:47:00.206870 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-05-28 17:47:00.207825 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-05-28 17:47:00.927807 | 2025-05-28 17:47:00.927966 | PLAY [Base post-fetch] 2025-05-28 17:47:00.943501 | 2025-05-28 17:47:00.943629 | TASK [fetch-output : Set log path for multiple nodes] 2025-05-28 17:47:01.009913 | orchestrator | skipping: Conditional result was False 2025-05-28 17:47:01.024728 | 2025-05-28 17:47:01.024955 | TASK [fetch-output : Set log path for single node] 2025-05-28 17:47:01.077530 | orchestrator | ok 2025-05-28 17:47:01.084724 | 2025-05-28 17:47:01.084857 | LOOP [fetch-output : Ensure local output dirs] 2025-05-28 17:47:01.573964 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/86f999a7dd444367bef7a55bf5f49ef2/work/logs" 2025-05-28 17:47:01.873237 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/86f999a7dd444367bef7a55bf5f49ef2/work/artifacts" 2025-05-28 17:47:02.161681 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/86f999a7dd444367bef7a55bf5f49ef2/work/docs" 2025-05-28 17:47:02.184519 | 2025-05-28 17:47:02.184770 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-05-28 17:47:03.118882 | orchestrator | changed: .d..t...... ./ 2025-05-28 17:47:03.119202 | orchestrator | changed: All items complete 2025-05-28 17:47:03.119242 | 2025-05-28 17:47:03.864531 | orchestrator | changed: .d..t...... ./ 2025-05-28 17:47:04.617864 | orchestrator | changed: .d..t...... 
./ 2025-05-28 17:47:04.646352 | 2025-05-28 17:47:04.646497 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-05-28 17:47:04.694077 | orchestrator | skipping: Conditional result was False 2025-05-28 17:47:04.709038 | orchestrator | skipping: Conditional result was False 2025-05-28 17:47:04.729121 | 2025-05-28 17:47:04.729224 | PLAY RECAP 2025-05-28 17:47:04.729296 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-05-28 17:47:04.729334 | 2025-05-28 17:47:04.852684 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-05-28 17:47:04.855514 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-05-28 17:47:05.623749 | 2025-05-28 17:47:05.623924 | PLAY [Base post] 2025-05-28 17:47:05.639372 | 2025-05-28 17:47:05.639513 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-05-28 17:47:06.659295 | orchestrator | changed 2025-05-28 17:47:06.668868 | 2025-05-28 17:47:06.669068 | PLAY RECAP 2025-05-28 17:47:06.669159 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-05-28 17:47:06.669237 | 2025-05-28 17:47:06.796446 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-05-28 17:47:06.798967 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-05-28 17:47:07.613277 | 2025-05-28 17:47:07.613451 | PLAY [Base post-logs] 2025-05-28 17:47:07.624071 | 2025-05-28 17:47:07.624200 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-05-28 17:47:08.068483 | localhost | changed 2025-05-28 17:47:08.084411 | 2025-05-28 17:47:08.084598 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-05-28 17:47:08.113538 | localhost | ok 2025-05-28 17:47:08.121381 | 2025-05-28 17:47:08.121593 | TASK [Set zuul-log-path fact] 2025-05-28 17:47:08.150496 | localhost | ok 2025-05-28 17:47:08.165134 | 2025-05-28 17:47:08.165280 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-05-28 17:47:08.194947 | localhost | ok 2025-05-28 17:47:08.201486 | 2025-05-28 17:47:08.201846 | TASK [upload-logs : Create log directories] 2025-05-28 17:47:08.723362 | localhost | changed 2025-05-28 17:47:08.728957 | 2025-05-28 17:47:08.729149 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-05-28 17:47:09.270814 | localhost -> localhost | ok: Runtime: 0:00:00.005395 2025-05-28 17:47:09.276945 | 2025-05-28 17:47:09.277166 | TASK [upload-logs : Upload logs to log server] 2025-05-28 17:47:09.846574 | localhost | Output suppressed because no_log was given 2025-05-28 17:47:09.849804 | 2025-05-28 17:47:09.849959 | LOOP [upload-logs : Compress console log and json output] 2025-05-28 17:47:09.908888 | localhost | skipping: Conditional result was False 2025-05-28 17:47:09.913850 | localhost | skipping: Conditional result was False 2025-05-28 17:47:09.920638 | 2025-05-28 17:47:09.920817 | LOOP [upload-logs : Upload compressed console log and json output] 2025-05-28 17:47:09.967497 | localhost | skipping: Conditional result was False 2025-05-28 17:47:09.968194 | 2025-05-28 17:47:09.971547 | localhost | skipping: Conditional result was False 2025-05-28 17:47:09.985576 | 2025-05-28 17:47:09.985837 | LOOP [upload-logs : Upload console log and json output]
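For reference, the cleanup plays above tear the testbed down in strict dependency order: servers first (waiting until they are gone), then keypairs, ports, volumes, router interfaces, subnets, networks, security groups, floating IPs and finally the router itself; the second cleanup pass finds nothing left to remove. A rough equivalent of that order in plain OpenStack CLI calls, using the resource names from the log (the per-node extra volumes are elided):

    openstack server delete --wait testbed-manager testbed-node-{0..5}
    openstack keypair delete testbed
    openstack port list -f value -c ID | xargs -r -n1 openstack port delete
    openstack volume delete testbed-volume-manager-base testbed-volume-{0..5}-node-base
    openstack router remove subnet testbed subnet-testbed-management
    openstack subnet delete subnet-testbed-management
    openstack network delete net-testbed-management
    openstack security group delete testbed-management testbed-node
    openstack floating ip delete 81.163.193.180
    openstack router delete testbed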